attentivesupport
llm-based robot that intervenes only when needed
Science Score: 36.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.6%) to scientific vocabulary
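A score like the one above can be thought of as a weighted combination of boolean indicator checks. The sketch below illustrates the general shape of such a computation only; the indexer's actual weights and formula are not documented here, and the illustrative weights do not reproduce the reported 36.0%:

```python
# Illustrative sketch of combining boolean science indicators into a score.
# The weights are assumptions for demonstration purposes; the real scorer's
# weighting scheme is unknown.

INDICATOR_WEIGHTS = {
    "citation_cff": 0.15,
    "codemeta_json": 0.15,
    "zenodo_json": 0.15,
    "doi_references": 0.10,
    "publication_links": 0.15,
    "academic_emails": 0.10,
    "institutional_owner": 0.10,
    "joss_metadata": 0.05,
    "vocabulary_similarity": 0.05,
}

def science_score(indicators: dict) -> float:
    """Return the weighted fraction of satisfied indicators as a percentage."""
    total = sum(INDICATOR_WEIGHTS.values())
    hit = sum(w for name, w in INDICATOR_WEIGHTS.items() if indicators.get(name))
    return round(100 * hit / total, 1)

# The three indicators reported as found for this repository:
found = {"codemeta_json": True, "zenodo_json": True, "publication_links": True}
print(science_score(found))  # 45.0 under these illustrative weights
```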
Keywords
Repository
llm-based robot that intervenes only when needed
Basic Info
- Host: GitHub
- Owner: HRI-EU
- License: bsd-3-clause
- Language: Python
- Default Branch: main
- Homepage: https://hri-eu.github.io/AttentiveSupport/
- Size: 75.6 MB
Statistics
- Stars: 33
- Watchers: 2
- Forks: 3
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Attentive support
A simulation-based implementation of the attentive support robot introduced in the paper "To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions". See the project website for an overview.
Setup
- Python 3.8 - 3.11
- Prerequisites for building the simulator workspace: g++, cmake, Libxml2, Qt5, qwt, OpenSceneGraph, Bullet Physics
- Ubuntu 20: libxml2-dev, qt5-default, libqwt-qt5-dev, libopenscenegraph-dev, libbullet-dev, libasio-dev, libzmq3-dev, portaudio19-dev
- Ubuntu 22: libxml2-dev, qtbase5-dev, qt5-qmake, libqwt-qt5-dev, libopenscenegraph-dev, libbullet-dev, libasio-dev, libzmq3-dev, portaudio19-dev
- Fedora: cmake, gcc-c++, OpenSceneGraph-devel, libxml2, qwt-qt5-devel, bullet-devel, asio-devel, cppzmq-devel, python3-devel, portaudio

Clone this repo and change into it: git clone https://github.com/HRI-EU/AttentiveSupport.git && cd AttentiveSupport
You can either run the setup script: bash build.sh or follow these steps:
1. Get the submodules: git submodule update --init --recursive
2. Create a build directory inside the AttentiveSupport directory (mkdir -p build) and change into it (cd build)
3. Install the Smile workspace: cmake ../src/Smile/ -DCMAKE_INSTALL_PREFIX=../install; make -j; make install
   Note that if you have the Smile workspace installed somewhere else, you have to change the relative path in config.yaml accordingly. For details, check here.
4. Install the Python dependencies: python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
5. Make sure you have an OpenAI API key set up, see the official instructions
6. Enjoy 🕹️
Containerized Runtime
- Build the container: docker build -t localhost/attentive_support .
- Run the container with display support and set the OPENAI_API_KEY as an environment variable.
podman:
podman run \
-e OPENAI_API_KEY=replace_me \
-e WAYLAND_DISPLAY \
--net=host \
-it \
localhost/attentive_support
docker (rootless):
docker run \
-e OPENAI_API_KEY=replace_me \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-it \
localhost/attentive_support
Remote, rootless with internal SSH server:
In certain scenarios it might not be possible to display the graphical window, e.g., when running rootless Docker on a remote machine with X11.
For these scenarios, the Docker image can be built with the option docker build --build-arg WITH_SSH_SERVER=true -t localhost/attentive_support .
Include proxy settings as necessary.
Start the image:
docker run \
-it \
-p 2022:22 \
localhost/attentive_support
This starts an SSH server on port 2022 that can be accessed with username root and password hri: ssh -X root@localhost -p 2022.
Run the example script:
export RCSVIEWER_SIMPLEGRAPHICS=True
export OPENAI_API_KEY=replace_me
/usr/bin/python -i /attentive_support/src/tool_agent.py
Usage
Running the agent
- Activate the virtual environment: source .venv/bin/activate
- Run the agent in interactive mode, from the AttentiveSupport directory: python -i src/tool_agent.py
- Provide commands: agent.plan_with_functions("Move the red glass to Felix")
- Reset the simulation: SIM.reset()
- Reset the agent: agent.reset()
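The interactive workflow can also be scripted. Below is a minimal sketch using a stand-in agent object; the real agent and SIM objects are created by src/tool_agent.py (this stub only mirrors the method names shown in this README and does not call the OpenAI API or the simulator):

```python
# Stub standing in for the agent created by src/tool_agent.py in interactive
# mode. The real plan_with_functions() asks the LLM to plan and execute tool
# calls; here we only record the instruction for illustration.

class StubAgent:
    def __init__(self):
        self.history = []

    def plan_with_functions(self, instruction: str) -> str:
        """Record an instruction; the real agent plans and runs tool calls."""
        self.history.append(instruction)
        return f"planned: {instruction}"

    def reset(self) -> None:
        """Clear the conversation history, as agent.reset() does."""
        self.history.clear()

agent = StubAgent()
print(agent.plan_with_functions("Move the red glass to Felix"))
agent.reset()
print(len(agent.history))  # 0
```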
Customizing the agent
- Change the agent's character:
  - Either via the system_prompt variable in gpt_config.py
  - Or directly (note that this is not persistent): agent.character = "You are a whiny but helpful robot."
- Provide the agent with further tools:
  - Define tools as Python functions in tools.py
  - Make sure to use type hints and add docstrings in the Sphinx notation. This is important so that function_analyzer.py can generate the function descriptions for OpenAI automagically.
  - For inspiration, check out some more examples in src/tool_variants/extended_tools.py
- Change generic settings, such as the model used and its temperature, via gpt_config.py
- Note: the gpt_config.py file can either be changed directly, or the filename of a custom config file can be passed to the agent when running in interactive mode: python -i src/tool_agent.py --config=custom_config
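The tool convention described above (type hints plus Sphinx-style docstrings that function_analyzer.py turns into OpenAI function descriptions) can be sketched as follows. Both the tool and the analyzer below are simplified stand-ins for illustration, not the repository's actual code, and the object data is made up:

```python
import inspect
import re
from typing import get_type_hints

def get_object_weight(object_name: str) -> float:
    """
    Return the weight of a known object in kilograms.

    :param object_name: Name of the object to look up.
    :return: Weight in kilograms, or -1.0 if the object is unknown.
    """
    known = {"red glass": 0.3, "cola bottle": 1.1}  # illustrative data
    return known.get(object_name, -1.0)

# Minimal mapping from Python annotations to JSON-schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def describe_function(fn) -> dict:
    """Build an OpenAI-style function schema from hints and :param: lines."""
    doc = inspect.getdoc(fn) or ""
    summary = doc.split("\n")[0]
    params = dict(re.findall(r":param (\w+): (.+)", doc))
    hints = get_type_hints(fn)
    properties = {
        name: {"type": PY_TO_JSON.get(tp, "string"),
               "description": params.get(name, "")}
        for name, tp in hints.items() if name != "return"
    }
    return {
        "name": fn.__name__,
        "description": summary,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

schema = describe_function(get_object_weight)
print(schema["name"])  # get_object_weight
```

This is why the type hints and Sphinx docstrings matter: without them, neither the parameter types nor the parameter descriptions can be recovered automatically.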
Additional features
- Stop the robot mid-action: activate the simulation window, then press Shift + S
- Set an agent as busy: set_busy("Daniel", "iphone5")
- Enable text-to-speech: enable_tts()
- Speech input: start talking after executing agent.execute_voice_command_once() and press any key (or a specified push_key) to stop the speech input
Example
Running the simulation with "Move the red glass to Felix" (demo animation):
For reproducing the situated interaction scenario, run the following:
- agent.plan_with_functions("Felix -> Daniel: Hey Daniel, what do we have to drink?")
  The robot should do nothing, because Daniel is available to answer.
- agent.plan_with_functions("Daniel -> Felix: We have two options, cola and fanta.")
  The robot should correct Daniel.
- agent.plan_with_functions("Felix -> Daniel: Daniel, please hand me the red glass.")
  The robot should help, because the red glass is out of reach for Daniel.
- Manually set Daniel to busy with the mobile: set_busy("Daniel", "iphone5")
- agent.plan_with_functions("Felix -> Daniel: Daniel, could you fill some coca cola into my glass?")
  The robot should help, as Daniel is busy.
- agent.plan_with_functions("Daniel -> Felix: Felix, can you give me a full glass of the same, but without sugar?")
  The robot should help, as Felix cannot see or reach the coke zero.
- agent.plan_with_functions("Felix -> Robot: What do you know about mixing coke and fanta?")
  The robot should answer.
Owner
- Name: Honda Research Institute Europe GmbH
- Login: HRI-EU
- Kind: organization
- Email: info@honda-ri.de
- Location: Offenbach, Germany
- Website: www.honda-ri.de
- Repositories: 44
- Profile: https://github.com/HRI-EU
GitHub Events
Total
- Issues event: 1
- Watch event: 14
- Issue comment event: 3
- Push event: 9
- Pull request review comment event: 1
- Pull request review event: 2
- Pull request event: 2
- Fork event: 1
Last Year
- Issues event: 1
- Watch event: 14
- Issue comment event: 3
- Push event: 9
- Pull request review comment event: 1
- Pull request review event: 2
- Pull request event: 2
- Fork event: 1
Dependencies
- PyYAML ==6.0.1
- black ==24.3.0
- docstring-parser ==0.16
- openai ==1.14.3
- ubuntu jammy-20240427 build