Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.7%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: ljang0
  • License: apache-2.0
  • Language: HTML
  • Default Branch: main
  • Size: 52.2 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 2 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks

[Website] [Paper]

VisualWebArena is a realistic and diverse benchmark for evaluating multimodal autonomous language agents. It comprises a set of diverse and complex web-based visual tasks that evaluate various capabilities of autonomous multimodal agents. It builds on the reproducible, execution-based evaluation introduced in WebArena.

Overview

TODOs

  • [ ] Add example scripts to run HuggingFace models.
  • [ ] Add scripts for end-to-end training and reset of environments.
  • [x] Add demo to run multimodal agents on any arbitrary webpage.

News

  • [02/14/2024]: Added a demo script for running the GPT-4V + SoM agent on any task on an arbitrary website.
  • [01/25/2024]: GitHub repo released with tasks and scripts for setting up the VWA environments.

Install

```bash
# Python 3.10+
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
playwright install
pip install -e .
```

You can also run the unit tests to ensure that VisualWebArena is installed correctly: `pytest -x`

End-to-end Evaluation

  1. Set up the standalone environments. Please check out this page for details.

  2. Configure the URLs for each website:

```bash
export CLASSIFIEDS="<your_classifieds_domain>:9980"
export CLASSIFIEDS_RESET_TOKEN="4b61655535e7ed388f0d40a93600254c"  # Default reset token for classifieds site, change if you edited its docker-compose.yml
export SHOPPING="<your_shopping_site_domain>:7770"
export REDDIT="<your_reddit_domain>:9999"
export WIKIPEDIA="<your_wikipedia_domain>:8888"
export HOMEPAGE="<your_homepage_domain>:4399"
```

In addition, if you want to run on the original WebArena tasks, make sure to also set up the CMS, GitLab, and map environments, and then set their respective environment variables:

```bash
export SHOPPING_ADMIN="<your_e_commerce_cms_domain>:7780/admin"
export GITLAB="<your_gitlab_domain>:8023"
export MAP="<your_map_domain>:3000"
```
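Before generating config files, it can help to confirm that each environment is reachable from your machine. The loop below is a minimal sketch (not part of the repo's scripts), assuming the variables hold host:port values that answer over plain HTTP; add SHOPPING_ADMIN, GITLAB, and MAP if you configured them:

```bash
# Reachability check (a sketch, not part of the repo): curl each configured site
# and report whether it responds. Assumes plain HTTP on the given host:port.
for url in "$CLASSIFIEDS" "$SHOPPING" "$REDDIT" "$WIKIPEDIA" "$HOMEPAGE"; do
  if curl -sSf -o /dev/null --max-time 10 "$url"; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
```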

  3. Generate config files for each test example:

```bash
python scripts/generate_test_data.py
```

You will see *.json files generated in the config_files folder. Each file contains the configuration for one test example.

If you want to run on the original WebArena tasks: Make sure to uncomment the line in scripts/generate_test_data.py to generate task files for config_files/test_webarena.raw.json.

  4. Obtain and save the auto-login cookies for all websites:

```bash
bash prepare.sh
```

If you want to run on the original WebArena tasks: Make sure to uncomment lines 35-38 in browser_env/auto_login.py to create cookies for the WebArena environments.

  5. Set up API keys.

If using OpenAI models, set a valid OpenAI API key (starting with sk-) as the environment variable:

```bash
export OPENAI_API_KEY=your_key
```

If using Gemini, first install the gcloud CLI. Configure the API key by authenticating with Google Cloud:

```bash
gcloud auth login
gcloud config set project <your_project_name>
```
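For OpenAI runs, a quick sanity check before launching a long evaluation can save a wasted run. The snippet below is a minimal sketch (not part of the repo's scripts); it only verifies that the variable is set and carries the expected prefix:

```bash
# Sanity check (a sketch, not part of the repo): confirm OPENAI_API_KEY is set
# and starts with "sk-" before kicking off an evaluation run.
python -c "import os; k = os.environ.get('OPENAI_API_KEY', ''); assert k.startswith('sk-'), 'OPENAI_API_KEY is missing or malformed'; print('OPENAI_API_KEY looks OK')"
```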

  6. Launch the evaluation. For example, to reproduce our GPT-3.5 captioning baseline:

```bash
python run.py \
  --instruction_path agent/prompts/jsons/p_cot_id_actree_3s.json \
  --test_start_idx 0 \
  --test_end_idx 1 \
  --result_dir <your_result_dir> \
  --test_config_base_dir=config_files/test_classifieds \
  --model gpt-3.5-turbo-1106 \
  --observation_type accessibility_tree_with_captioner
```

This script will run the first Classifieds example with the GPT-3.5 caption-augmented agent. The trajectory will be saved in <your_result_dir>/0.html. Note that the baselines that include a captioning model run on GPU by default (e.g., BLIP-2-T5XL as the captioning model will take up approximately 12GB of GPU VRAM).
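To evaluate more examples, increase --test_end_idx (and shift --test_start_idx). The loop below is a hedged sketch, not a script from the repo, showing one way to split a run into index ranges; the shard bounds are illustrative and should match the number of config files you generated:

```bash
# A sketch (not part of the repo): shard the evaluation into index ranges by
# re-invoking run.py. Shards run sequentially to avoid contending for the GPU captioner.
for start in 0 50 100 150; do
  end=$((start + 50))
  python run.py \
    --instruction_path agent/prompts/jsons/p_cot_id_actree_3s.json \
    --test_start_idx "$start" \
    --test_end_idx "$end" \
    --result_dir "results_shard_${start}" \
    --test_config_base_dir=config_files/test_classifieds \
    --model gpt-3.5-turbo-1106 \
    --observation_type accessibility_tree_with_captioner
done
```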

GPT-4V + SoM Agent


To run the GPT-4V + SoM agent we proposed in our paper, you can run evaluation with the following flags:

```bash
python run.py \
  --instruction_path agent/prompts/jsons/p_som_cot_id_actree_3s.json \
  --test_start_idx 0 \
  --test_end_idx 1 \
  --result_dir <your_result_dir> \
  --test_config_base_dir=config_files/test_classifieds \
  --model gpt-4-vision-preview \
  --action_set_tag som --observation_type image_som
```

To run Gemini models, you can change the provider, model, and the max_obs_length (as Gemini uses characters instead of tokens for inputs):

```bash
python run.py \
  --instruction_path agent/prompts/jsons/p_som_cot_id_actree_3s.json \
  --test_start_idx 0 \
  --test_end_idx 1 \
  --max_steps 1 \
  --result_dir <your_result_dir> \
  --test_config_base_dir=config_files/test_classifieds \
  --provider google --model gemini --mode completion --max_obs_length 15360 \
  --action_set_tag som --observation_type image_som
```

Demo

We have also prepared a demo for you to run the agents on your own task on an arbitrary webpage.

After following the setup instructions above and setting the OpenAI API key (the other environment variables for website URLs aren't really used, so you should be able to set them to some dummy variable), you can run the GPT-4V + SoM agent with the following command:

```bash
python run_demo.py \
  --instruction_path agent/prompts/jsons/p_som_cot_id_actree_3s.json \
  --start_url "https://www.amazon.com" \
  --image "https://media.npr.org/assets/img/2023/01/14/this-is-fine_wide-0077dc0607062e15b476fb7f3bd99c5f340af356-s1400-c100.jpg" \
  --intent "Help me navigate to a shirt that has this on it." \
  --result_dir demo_test_amazon \
  --model gpt-4-vision-preview \
  --action_set_tag som --observation_type image_som \
  --render
```

This tasks the agent to find a shirt that looks like the provided image (the "This is fine" dog) from Amazon. Have fun!
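If you want to try a different goal, the same flags can be reused with a new --intent and --result_dir. The command below is a hedged example (the intent text and result directory are placeholders; everything else mirrors the demo command above):

```bash
# A sketch of adapting the demo to a different goal: only the free-text --intent
# and --result_dir change; all other flags mirror the working command above.
python run_demo.py \
  --instruction_path agent/prompts/jsons/p_som_cot_id_actree_3s.json \
  --start_url "https://www.amazon.com" \
  --image "https://media.npr.org/assets/img/2023/01/14/this-is-fine_wide-0077dc0607062e15b476fb7f3bd99c5f340af356-s1400-c100.jpg" \
  --intent "Find a coffee mug with this design on it and open its product page." \
  --result_dir demo_test_amazon_mug \
  --model gpt-4-vision-preview \
  --action_set_tag som --observation_type image_som \
  --render
```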

Citation

If you find our environment or our models useful, please consider citing VisualWebArena as well as WebArena:

```
@article{koh2024visualwebarena,
  title={VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks},
  author={Koh, Jing Yu and Lo, Robert and Jang, Lawrence and Duvvur, Vikram and Lim, Ming Chong and Huang, Po-Yu and Neubig, Graham and Zhou, Shuyan and Salakhutdinov, Ruslan and Fried, Daniel},
  journal={arXiv preprint arXiv:2401.13649},
  year={2024}
}

@article{zhou2024webarena,
  title={WebArena: A Realistic Web Environment for Building Autonomous Agents},
  author={Zhou, Shuyan and Xu, Frank F and Zhu, Hao and Zhou, Xuhui and Lo, Robert and Sridhar, Abishek and Cheng, Xianyi and Bisk, Yonatan and Fried, Daniel and Alon, Uri and others},
  journal={ICLR},
  year={2024}
}
```

Acknowledgements

Our code is heavily based on the WebArena codebase.

Owner

  • Login: ljang0
  • Kind: user

Citation (CITATION.cff)

@article{koh2024visualwebarena,
  title={VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks},
  author={Koh, Jing Yu and Lo, Robert and Jang, Lawrence and Duvvur, Vikram and Lim, Ming Chong and Huang, Po-Yu and Neubig, Graham and Zhou, Shuyan and Salakhutdinov, Ruslan and Fried, Daniel},
  journal={arXiv preprint arXiv:24xx.xxxxx},
  year={2024}
}
