naver

[ICCV] NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning

https://github.com/controlnet/naver

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

[ICCV] NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning

Basic Info
Statistics
  • Stars: 15
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed 8 months ago
Metadata Files
Readme License Citation

README.md

NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning

This repo is the official implementation of the paper NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning, accepted at ICCV 2025.

Release

  • [2025/06/28] 🔥 NAVER code is open sourced in GitHub.
  • [2025/06/25] 🎉 NAVER paper is accepted by ICCV 2025.

TODOs

We're working on the following TODOs:

  • [x] GUI demo.
  • [ ] Support more LLMs.
  • [ ] Video demo & slides presentation.

Installation

Requirements

  • Python >= 3.10
  • conda

Please follow the instructions below to install the required packages and set up the environment.

1. Clone this repository.

```bash
git clone https://github.com/ControlNet/NAVER
```

2. Setup conda environment and install dependencies.

Option 1: Using pixi (recommended):

```bash
pixi install
pixi shell
```

Option 2: Building from source (you may need to set up CUDA and PyTorch manually):

```bash
conda install conda-forge/label/rust_dev::rust=1.78 -c conda-forge -y
pip install "git+https://github.com/scallop-lang/scallop.git@f8fac18#egg=scallopy&subdirectory=etc/scallopy"
pip install -e .
```

3. Configure the environments

Edit the file .env, or set the variables in your shell, to configure the environment.

```
OPENAI_API_KEY=your-api-key             # if you want to use OpenAI LLMs
AZURE_OPENAI_URL=                       # if you want to use Azure OpenAI LLMs
OLLAMA_HOST=http://ollama.server:11434  # if you want to use your Ollama server for Llama or DeepSeek

# do not change this TORCH_HOME variable
TORCH_HOME=./pretrained_models
```
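Instead of editing .env, the same variables can also be set from a small Python wrapper before launching the demos. This is a minimal sketch; the values are placeholders to replace with your own, and TORCH_HOME should stay at `./pretrained_models` as noted above:

```python
import os

# Placeholder values — replace with your own credentials/endpoints.
os.environ.setdefault("OPENAI_API_KEY", "your-api-key")
os.environ.setdefault("OLLAMA_HOST", "http://ollama.server:11434")

# Do not change this one: the download script stores models here.
os.environ.setdefault("TORCH_HOME", "./pretrained_models")
```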

4. Download the pretrained models

Run the scripts to download the pretrained models to the ./pretrained_models directory.

```bash
python -m hydra_vl4ai.download_model --base_config config/refcoco.yaml --model_config config/model_config.yaml --extra_packages naver.tool
```

Inference

You may need 28 GB of VRAM to run NAVER. Consider editing ./config/model_config.yaml to spread the models across multiple GPUs.
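As an illustration of the multi-GPU idea, a hypothetical config fragment might pin each heavy model to a different CUDA device. The key names below are invented for illustration; check the actual schema in ./config/model_config.yaml:

```yaml
# Hypothetical sketch only — consult the real model_config.yaml for key names.
grounding_dino:
  device: cuda:0
segment_anything:
  device: cuda:1
```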

Inference with GUI

You need Node.js and npm to run the GUI demo; it will automatically compile and build the frontend.

The GUI will be available at http://0.0.0.0:8000.

```bash
python demo_gui.py \
    --base_config <YOUR-CONFIG-DIR> \
    --model_config <MODEL-CONFIG-PATH>
```

(GUI preview screenshot)

Inference with a single image and query

```bash
python demo_cli.py \
    --image <IMAGE_PATH> \
    --query <QUERY> \
    --base_config <YOUR-CONFIG-DIR> \
    --model_config <MODEL-CONFIG-PATH>
```

The result will be printed in the console.

Inference on a dataset

```bash
python main.py \
    --data_root <YOUR-DATA-ROOT> \
    --base_config <YOUR-CONFIG-DIR> \
    --model_config <MODEL-CONFIG-PATH>
```

Then the inference results are saved in the ./result directory for evaluation.
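The saved files can be inspected with a few lines of Python. This is a hedged sketch assuming the standard JSONL convention of one JSON object per line; the exact fields in each record are not documented here, so check a line of your own output:

```python
import json

def load_results(path):
    """Read a JSONL result file: one JSON object per non-empty line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```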

Evaluation

```bash
python evaluate.py --input <RESULT_JSONL_PATH>
```

The evaluation results will be printed in the console. Note that LLM output is nondeterministic, so the evaluation results may differ slightly from those in the paper.

Citation

If you find this work useful for your research, please consider citing it.

```bibtex
@article{cai2025naver,
  title   = {NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning},
  author  = {Cai, Zhixi and Ke, Fucai and Jahangard, Simindokht and Garcia de la Banda, Maria and Haffari, Reza and Stuckey, Peter J. and Rezatofighi, Hamid},
  journal = {arXiv preprint arXiv:2502.00372},
  year    = {2025},
}
```

Owner

  • Name: ControlNet
  • Login: ControlNet
  • Kind: user

Study on: Computer Vision | Artificial Intelligence

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you find this work useful for your research, please consider citing it."
preferred-citation:
  type: article
  authors:
  - family-names: "Cai"
    given-names: "Zhixi"
  - family-names: "Ke"
    given-names: "Fucai"
  - family-names: "Jahangard"
    given-names: "Simindokht"
  - family-names: "Garcia de la Banda"
    given-names: "Maria"
  - family-names: "Haffari"
    given-names: "Reza"
  - family-names: "Stuckey"
    given-names: "Peter J."
  - family-names: "Rezatofighi"
    given-names: "Hamid"
  journal: "arXiv preprint arXiv:2502.00372"
  title: "NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning"
  year: 2025

GitHub Events

Total
  • Issues event: 2
  • Watch event: 18
  • Push event: 15
  • Public event: 1
  • Pull request event: 2
Last Year
  • Issues event: 2
  • Watch event: 18
  • Push event: 15
  • Public event: 1
  • Pull request event: 2

Dependencies

.github/workflows/release.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
  • pypa/gh-action-pypi-publish release/v1 composite
  • softprops/action-gh-release v1 composite
module_repos/detectron2/.github/actions/install_detectron2/action.yml actions
module_repos/detectron2/.github/actions/install_detectron2_win/action.yml actions
module_repos/detectron2/.github/actions/install_linux_dep/action.yml actions
module_repos/detectron2/.github/actions/install_linux_gpu_dep/action.yml actions
module_repos/detectron2/.github/actions/install_windows_dep/action.yml actions
module_repos/detectron2/.github/actions/run_unittests/action.yml actions
module_repos/detectron2/.github/actions/run_unittests_win/action.yml actions
module_repos/detectron2/.github/actions/uninstall_tests/action.yml actions
module_repos/Grounded-Segment-Anything/Dockerfile docker
  • pytorch/pytorch 1.13.1-cuda11.6-cudnn8-devel build
module_repos/detectron2/docker/Dockerfile docker
  • nvidia/cuda 11.1.1-cudnn8-devel-ubuntu18.04 build
module_repos/detectron2/docker/docker-compose.yml docker
module_repos/Grounded-Segment-Anything/GroundingDINO/pyproject.toml pypi
module_repos/Grounded-Segment-Anything/GroundingDINO/requirements.txt pypi
  • addict ==2.4.
  • numpy ==1.26.
  • opencv-python *
  • pycocotools ==2.0.
  • supervision ==0.22.
  • timm ==0.9.
  • torch *
  • torchvision *
  • transformers *
  • yapf ==0.7.
module_repos/Grounded-Segment-Anything/GroundingDINO/setup.py pypi
module_repos/Grounded-Segment-Anything/requirements.txt pypi
  • Pillow *
  • PyYAML *
  • addict *
  • diffusers *
  • fairscale *
  • gradio *
  • huggingface_hub *
  • litellm *
  • matplotlib *
  • nltk *
  • numpy *
  • onnxruntime *
  • opencv_python *
  • pycocotools *
  • requests *
  • setuptools *
  • supervision *
  • termcolor *
  • timm *
  • torch *
  • torchvision *
  • transformers *
  • yapf *
module_repos/Grounded-Segment-Anything/segment_anything/setup.py pypi
module_repos/Grounded-Segment-Anything/voxelnext_3d_box/requirements.txt pypi
  • easydict *
  • matplotlib *
  • numpy *
  • onnx *
  • onnxruntime *
  • opencv-python *
  • pycocotools *
  • pyyaml *
  • torch *
  • torchvision *
module_repos/detectron2/docs/requirements.txt pypi
  • Pillow *
  • cloudpickle *
  • docutils ==0.16
  • future *
  • hydra-core >=1.1.0.dev5
  • matplotlib *
  • numpy *
  • omegaconf >=2.1.0.dev24
  • recommonmark ==0.6.0
  • scipy *
  • sphinx ==3.2.0
  • sphinx_rtd_theme *
  • tabulate *
  • termcolor *
  • timm *
  • tqdm *
  • yacs *
module_repos/detectron2/projects/DensePose/setup.py pypi
  • av >=8.0.3
  • detectron2 *
  • opencv-python-headless >=4.5.3.56
  • scipy >=1.5.4
module_repos/detectron2/projects/TensorMask/setup.py pypi
module_repos/detectron2/setup.py pypi
  • Pillow >=7.1
  • black *
  • cloudpickle *
  • dataclasses *
  • fvcore >=0.1.5,<0.1.6
  • hydra-core >=1.1,<1.3
  • iopath >=0.1.7,<0.1.10
  • matplotlib *
  • omegaconf >=2.1,<2.4
  • opencv *
  • packaging *
  • pycocotools >=2.0.2
  • tabulate *
  • tensorboard *
  • termcolor >=1.1
  • tqdm >4.29.0
  • yacs >=0.1.8
pyproject.toml pypi
setup.py pypi
  • accelerate *
  • bbox-visualizer *
  • datasets *
  • hydra_vl4ai ==0.0.5
  • ollama *
  • openai *
  • orjson *
  • problog *
  • python-dotenv *
  • pyyaml *
  • rich *
  • scipy *
  • sentencepiece *
  • tensorneko ==0.3.21
  • timm *
  • tokenizers *
  • torch *
  • transformers *
  • word2number *