audsemthinker
GitHub Repository for the AudSemThinker Model and the AudSem Dataset
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.1%) to scientific vocabulary
Repository
GitHub Repository for the AudSemThinker Model and the AudSem Dataset
Statistics
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
AudSemThinker: Enhancing Audio-Language Models Through Reasoning over Semantics of Sound
Official repository for the AudSemThinker model and AudSem dataset. This repository contains the code and resources for the AudSemThinker project, focusing on creating a large-scale audio-language dataset from YouTube subtitles and training multimodal models for audio semantic understanding.
Project Links
| Category | Link                       | Description                                             |
|----------|----------------------------|---------------------------------------------------------|
| Paper    | arXiv:2505.14142           | AudSemThinker research paper                            |
| Datasets | gijs/audsem-simple         | Simplified AudSem dataset without semantic descriptors  |
|          | gijs/audsem                | Full AudSem dataset with semantic descriptors           |
| Models   | gijs/audsemthinker-qa      | AudSemThinker model fine-tuned for QA                   |
|          | gijs/audsemthinker         | General AudSemThinker model                             |
|          | gijs/audsemthinker-qa-grpo | AudSemThinker QA model with GRPO optimization           |
| Demo     | gijs/audsemthinker         | Interactive demo of the AudSemThinker model             |
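The datasets and model checkpoints above are hosted on the Hugging Face Hub, so they can be pulled programmatically. A minimal loading sketch with the `datasets` library (the split name and streaming mode are assumptions; check the dataset card for the exact configuration):

```python
from datasets import load_dataset

# Stream the simplified dataset from the Hub. The "train" split name is
# an assumption -- consult the gijs/audsem-simple dataset card.
ds = load_dataset("gijs/audsem-simple", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())
```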
Repository Overview
The repository is structured into several directories, each serving a specific purpose in the dataset creation and model training pipeline:
- filtering/: Filters raw YouTube subtitle data to extract high-quality sound descriptions.
- clip_embeddings/: Converts audio and text data into embeddings for further filtering based on cosine similarity.
- metadata_evaluation/: Generates the final dataset by extracting multimodal features (audio, image, video) and combining them.
- tarify/: Packages the dataset into WebDataset tar files for efficient data loading.
- training/: Contains scripts for training models using Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO), along with evaluation scripts.
Getting Started
Environment Setup
We recommend using a Conda environment with Python 3.10:
```bash
conda create -n audsemthinker python=3.10
conda activate audsemthinker
conda install pip
pip install -r requirements.txt
```
Running the Pipeline
Below are detailed instructions and example commands for each stage of the pipeline.
1. Filtering Stage (filtering/)
This stage processes raw YouTube subtitles to extract and refine sound event descriptions.
Download Subtitles:
```bash
python filtering/download/download_pd.py --output_dir <output_dir> --other_args <values>
```
Preprocessing:
```bash
python filtering/preprocessing/main.py --input_dir <input_dir> --output_dir <output_dir>
```
Classification (BERT-based and Mistral-based):
```bash
python filtering/classification/main.py --input_dir <input_dir> --output_dir <output_dir>
python filtering/mistral_classification/main.py --input_dir <input_dir> --output_dir <output_dir>
```
Combining Results:
```bash
python filtering/02_combining.py --input_dirs <classification_output_dirs> --output_dir <combined_output_dir>
python filtering/04_combination.py --input_dir <combined_output_dir> --output_dir <final_filtered_output_dir>
```
2. Embedding-based Filtering (clip_embeddings/)
Refines the dataset using audio and text embeddings.
```bash
python clip_embeddings/prepreprocess.py --input_dir <input_dir> --output_dir <output_dir>
python clip_embeddings/preprocess_audio.py --input_dir <input_dir> --output_dir <output_dir>
python clip_embeddings/process_audio.py --input_dir <input_dir> --output_dir <output_dir>
python clip_embeddings/process_text.py --input_dir <input_dir> --output_dir <output_dir>
python clip_embeddings/postprocess_sim.py --input_dir <input_dir> --output_dir <output_dir>
python clip_embeddings/postpostprocess.py --input_dir <input_dir> --output_dir <output_dir>
```
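The post-processing step keeps only clips whose audio embedding agrees with the text embedding of the description. A minimal sketch of such a cosine-similarity filter (the 0.3 threshold and the array layout are illustrative assumptions, not the values used by postprocess_sim.py):

```python
import numpy as np

def cosine_filter(audio_emb: np.ndarray, text_emb: np.ndarray, threshold: float = 0.3):
    """Return indices of paired (N, D) embedding rows whose cosine
    similarity meets the threshold. The threshold is illustrative."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = (a * t).sum(axis=1)  # row-wise dot product of unit vectors
    return np.nonzero(sims >= threshold)[0]
```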
3. Metadata Evaluation (metadata_evaluation/)
Extracts multimodal features and generates the final dataset.
Run Feature Extractors:
```bash
python metadata_evaluation/audio_model/main.py --data_dir <data_dir> --num_workers 16 --batch_size 16 --num_shards 100 --start_shard 0
python metadata_evaluation/image_model/main.py --data_dir <data_dir> --num_workers 16 --batch_size 16 --num_shards 100 --start_shard 0
python metadata_evaluation/video_model/main.py --data_dir <data_dir> --num_workers 16 --batch_size 16 --num_shards 100 --start_shard 0
```
Combine and Process Metadata:
Run scripts sequentially from 01 to 05:
```bash
python metadata_evaluation/combine_data/01_combine_outputs.py --input_dir <input_dir> --output_dir <output_dir>
python metadata_evaluation/combine_data/02_postfiltering.py --input_dir <input_dir> --output_dir <output_dir>
python metadata_evaluation/combine_data/03_opensource_caption_creation_batch.py --input_dir <input_dir> --output_dir <output_dir>
python metadata_evaluation/combine_data/04_opensource_qa_multiple_choice_generation_batch.py --input_dir <input_dir> --output_dir <output_dir>
python metadata_evaluation/combine_data/05_get_data_files.py --input_dir <input_dir> --output_dir <output_dir>
```
4. Tarification (tarify/)
Package data into WebDataset tar files:
```bash
python tarify/laion-tarify.py --input_dir <input_dir> --output_dir <output_dir>
```
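For reference, this is roughly what WebDataset packaging looks like with the `webdataset` library; the sample keys (`flac`, `json`) and the shard size are assumptions, not necessarily what laion-tarify.py emits:

```python
import json
import webdataset as wds

# Placeholder data; in the pipeline this would be the filtered clips.
samples = [(b"<flac bytes>", {"caption": "a dog barks twice"})]

with wds.ShardWriter("audsem-%06d.tar", maxcount=1000) as sink:
    for i, (audio_bytes, meta) in enumerate(samples):
        sink.write({
            "__key__": f"sample{i:08d}",        # unique key per sample
            "flac": audio_bytes,                # raw audio bytes
            "json": json.dumps(meta).encode(),  # caption/QA metadata
        })
```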
5. Data Preparation for Training (filtering/preparation/)
Create WebDataset shards for training:
```bash
python 01_create_webdataset.py --jsonl_path <path_to_jsonl> --output_dir <output_webdataset_dir>
python 01_create_webdataset_mc_qa.py --jsonl_path <path_to_mc_qa_jsonl> --output_dir <output_mc_qa_webdataset_dir> --semantic
python 01_create_webdataset_qa.py --jsonl_path <path_to_qa_jsonl> --output_dir <output_qa_webdataset_dir> --semantic
```
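The resulting shards can then be iterated during training. A minimal loading sketch (the key names mirror the writer example above and are assumptions about the actual shard contents):

```python
import json
import webdataset as wds

# Brace expansion selects a range of shard files.
dataset = wds.WebDataset("output_webdataset_dir/shard-{000000..000009}.tar")
for sample in dataset:
    audio_bytes = sample["flac"]       # raw audio bytes; decode as needed
    meta = json.loads(sample["json"])  # caption or QA record
    break
```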
6. Model Training (training/)
Supervised Fine-Tuning (SFT):
```bash
python main.py --shard_folder <path_to_shards> --no_debug --semantic_elements
```
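The --semantic_elements flag presumably enables training with the semantic descriptors that distinguish the full audsem dataset from audsem-simple; this is an inference from the dataset naming rather than documented behavior.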
Group Relative Policy Optimization (GRPO):
```bash
# Start the vLLM generation server
CUDA_VISIBLE_DEVICES=3 python -m trl.scripts.vllm_serve --model <MODEL_PATH> --tensor_parallel_size 1 &

# Run GRPO training
CUDA_VISIBLE_DEVICES=0,1,2 accelerate launch --num_processes 3 --use_deepspeed --zero_stage 3 grpo_main.py --optimization lora --model_id_or_path <MODEL_PATH> --name
```
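In this layout, one GPU (device 3) is reserved for the vLLM generation server while the remaining three run the GRPO update under DeepSpeed ZeRO stage 3; adjust the CUDA_VISIBLE_DEVICES indices and --num_processes to match your hardware.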
7. Evaluation (training/evaluate/ and training/AudioBench/)
MMAU Evaluation:
```bash
python mmau_mini.py --model_path <model_checkpoint> --semantic_elements
python mmau_mini_omni_targetlength.py --model_path <model_checkpoint> --target_length 25
```
AudioBench Evaluation:
```bash
# Start the evaluation server
CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model casperhansen/llama-3-70b-instruct-awq --quantization awq --port 5001 &

# Run the evaluation
python src/main_evaluate.py --dataset_name
```
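Here vLLM serves an AWQ-quantized Llama 3 70B model on port 5001, presumably as the judge model for AudioBench's model-based scoring; the evaluation script is then pointed at the dataset to score.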
Citation
If you use the AudSemThinker model or AudSem dataset in your research, please cite the accompanying paper:
```bibtex
@misc{wijngaard2025audsemthinkerenhancingaudiolanguagemodels,
  title={AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound},
  author={Gijs Wijngaard and Elia Formisano and Michele Esposito and Michel Dumontier},
  year={2025},
  eprint={2505.14142},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2505.14142},
}
```
Owner
- Name: Gijs Wijngaard
- Login: GlJS
- Kind: user
- Repositories: 1
- Profile: https://github.com/GlJS
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
type: software
title: "AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound"
authors:
  - family-names: "Wijngaard"
    given-names: "Gijs"
  - family-names: "Formisano"
    given-names: "Elia"
  - family-names: "Esposito"
    given-names: "Michele"
  - family-names: "Dumontier"
    given-names: "Michel"
date-released: 2025-01-01
url: "https://github.com/GLJS/AudSemThinker"
preferred-citation:
  type: article
  title: "AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound"
  authors:
    - family-names: "Wijngaard"
      given-names: "Gijs"
    - family-names: "Formisano"
      given-names: "Elia"
    - family-names: "Esposito"
      given-names: "Michele"
    - family-names: "Dumontier"
      given-names: "Michel"
  year: 2025
  url: "https://arxiv.org/abs/2505.14142"
  repository: "arXiv"
  identifiers:
    - type: other
      value: "arXiv:2505.14142"
      description: "arXiv preprint"
```
GitHub Events
Total
- Watch event: 3
- Issue comment event: 1
- Push event: 5
- Create event: 2
Last Year
- Watch event: 3
- Issue comment event: 1
- Push event: 5
- Create event: 2
Dependencies
- InstructorEmbedding *
- Pillow >=9.5.0
- aac-metrics *
- accelerate >=0.26.0
- av <13
- bitsandbytes >=0.45.2
- blobfile *
- conette *
- cuda-python *
- datasets *
- decord >=0.6.0
- deepspeed *
- einops *
- einops_exts *
- ffmpeg-python >=0.2.0
- fire *
- h5py *
- huggingface-hub *
- info-nce-pytorch >=0.1.1
- jsonlines *
- loguru *
- matplotlib *
- moviepy *
- ninja *
- numpy >=1.24.0
- nvidia-cuda-nvrtc-cu12 *
- open-clip-torch *
- openai *
- opencv-python >=4.7.0
- pandas >=2.0.0
- pathlib >=1.0.1
- peft *
- polars *
- pybind11 *
- pydantic >=2.0.0
- python-dotenv *
- pytorch-lightning *
- pytorchvideo *
- qwen-vl-utils *
- requests >=2.31.0
- resampy *
- sentence-transformers >3.4.0
- sentencepiece *
- soundfile *
- submitit *
- tenacity *
- tokenizers >=0.20.0
- torch >=2.5.1
- torchaudio >=2.5.1
- torchcodec *
- torchvision >=0.20.1
- tqdm >=4.65.0
- transformers >=4.47
- trl >=0.16.0
- vllm >0.7.2
- wandb *
- webdataset *
- wget *
- xgrammar *