flesh_effort
This repository stores the coding pipeline used to process and analyze data associated with the project "Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game" (FLESH).
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.7%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: sarkadava
- License: cc0-1.0
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://sarkadava.github.io/FLESH_Effort/
- Size: 22.1 GB
Statistics
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
GitHub repository for the project "Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game"

This repository stores the coding pipeline used to process and analyze data associated with the project "Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game". The project investigates how people modulate their effort when they encounter communicative breakdowns in a referential game. It is part of the FLESH project.
This project has a two-phase preregistration. In Phase I, we preregistered the data collection. In Phase II, we preregistered the analysis plan, including the processing steps.
Updates
- [ ] Preregistration of data collection
- [ ] Data collection completed
- [ ] Preregistration of analysis and processing steps
- [ ] Preprint published
- [ ] Manuscript published
- [ ] Data available at open access repository
Overview
The pipeline consists of several processing and analysis steps, whereby each step works on the output of the previous step. However, the steps are built in a modular way, so individual scripts can also be used on their own.
You can browse the pipeline as a website at https://sarkadava.github.io/FLESH_Effort/.
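To illustrate the modular convention, each step can be thought of as a small script that reads the previous step's output file and writes its own, so any step can be rerun in isolation. The sketch below is only a minimal illustration of that idea; the file names, column names, and transformation are hypothetical, not the pipeline's actual interface:

```python
# Minimal sketch of a modular pipeline step (hypothetical files and columns).
# Each step reads the previous step's output and writes its own output,
# so a single step can be rerun as long as its input file exists.
import pandas as pd

def run_step(in_path: str, out_path: str) -> None:
    df = pd.read_csv(in_path)  # output of the previous step
    # Example transformation: z-score a (hypothetical) movement-speed column
    df["speed_z"] = (df["speed"] - df["speed"].mean()) / df["speed"].std()
    df.to_csv(out_path, index=False)  # input for the next step

if __name__ == "__main__":
    run_step("01_raw_timeseries.csv", "02_processed_timeseries.csv")
```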
The pipeline is divided into the following steps:
- Pre-processing I: From XDF to raw files
- Motion tracking I: Preparation of videos
- Motion tracking II: 2D pose estimation via OpenPose
- Motion tracking III: Triangulation via Pose2Sim
- Motion tracking IV: Modeling inverse kinematics and dynamics
- Processing I: Motion tracking and balance
- Processing II: Acoustics
- Processing III: Merging multimodal data
- Movement annotation I: Preparing training data and data for the classifier
- Movement annotation II: Training the movement classifier and annotating timeseries data
- Movement annotation III: Computing interrater agreement between manual and automatic annotation
- Final merge: Merging timeseries with annotations
- Computing concept similarity using ConceptNet word embeddings (see the sketch after this list)
- Extraction of effort-related features
- Exploratory Analysis I: Using PCA to identify effort dimensions
- Exploratory Analysis II: Identifying effort-related features contributing to misunderstanding resolution
- Statistical analysis: Modelling the effect of communicative attempt (H1) and answer similarity (H2) on effort
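As a rough illustration of the concept-similarity step, the sketch below computes cosine similarity between two concept labels using pretrained ConceptNet Numberbatch vectors. The file name and word2vec-style text format reflect how Numberbatch is distributed; whether the pipeline loads the vectors this way is an assumption:

```python
# Sketch: cosine similarity between concepts via ConceptNet Numberbatch.
# Assumes a local copy of the English vectors, e.g. numberbatch-en-19.08.txt,
# a word2vec-style text file with one "word v1 v2 ..." entry per line.
import numpy as np

def load_vectors(path: str) -> dict[str, np.ndarray]:
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "n_words n_dims" header line
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = load_vectors("numberbatch-en-19.08.txt")  # assumed local file
print(cosine_similarity(vectors["hammer"], vectors["nail"]))
```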
Prerequisites
If you wish to use only some steps of the pipeline, you will find the prerequisites and an installation guide in the respective folder.
If you wish to run the entire pipeline, you can follow the steps below. Note that this project is mostly in Python, but it also implements some steps in R; Visual Studio Code, for example, allows one to run both Python and R scripts. Additionally, the workflow depends on external software such as Praat, ELAN, and EasyDIAG. Refer to each software's documentation for installation.
To prevent conflicts in dependencies, we recommend following our workflow of creating three virtual environments: one for general processing steps, one for Pose2Sim, and one for OpenSim scripting. The installation below sets up the environment for the general processing steps; you can find installation instructions for the other two environments in their respective folders (02_MotionTracking_processing).
```bash
# 1 - Clone the Repository
git clone https://github.com/sarkadava/FLESH_Effort.git
cd FLESH_Effort

# 2 - Create a FLESH_TSPROCESS Conda Environment (Recommended)
conda create --name FLESH_TSPROCESS python=3.12.2
conda activate FLESH_TSPROCESS

# 3 - Install Dependencies
pip install -r requirements_tsprocess.txt

# 4 - Add Conda Environment to Jupyter Notebook
pip install ipykernel
python -m ipykernel install --user --name=FLESH_TSPROCESS --display-name "Python (FLESH_TSPROCESS)"

# 5 - Run the Jupyter Notebook (Optional - You can also open the scripts in Visual Studio Code)
jupyter notebook
```
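If you want to confirm that the kernel from step 4 is registered, a quick check with jupyter_client (installed alongside Jupyter) might look like the following; note that Jupyter lowercases kernel names on installation:

```python
# Quick check that the kernel from step 4 is visible to Jupyter.
# Jupyter lowercases kernel names on install, so look for "flesh_tsprocess".
from jupyter_client.kernelspec import KernelSpecManager

specs = KernelSpecManager().find_kernel_specs()  # {kernel_name: resource_dir}
if "flesh_tsprocess" in specs:
    print("FLESH_TSPROCESS kernel registered at", specs["flesh_tsprocess"])
else:
    print("Kernel not found; available kernels:", sorted(specs))
```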
How to cite
If you want to use and cite any part of the coding pipeline, cite:
Kadavá, Š., Ćwiek, A., & Pouw, W. (2025). Coding pipeline to the project Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game (Version 1.0.0) [Computer software]. https://github.com/sarkadava/FLESH_Effort
If you want to cite the project, cite:
Kadavá, Š., Pouw, W., Fuchs, S., Holler, J., & Ćwiek, A. (2025). Putting in the Effort: Modulation of Multimodal Effort in Communicative Breakdowns during a Gestural-Vocal Referential Game. OSF Registries. https://osf.io/8ajsg
Contact
kadava[at]leibniz-zas[dot]de (Šárka Kadavá)
Owner
- Name: Šárka Kadavá
- Login: sarkadava
- Kind: user
- Website: https://sarkadava.github.io/
- Twitter: sarkadava
- Repositories: 1
- Profile: https://github.com/sarkadava
PhD researcher at DCC Nijmegen & ZAS Berlin
GitHub Events
Total
- Push event: 4
- Public event: 1
Last Year
- Push event: 4
- Public event: 1