Mobile Gaze Mapping

Mobile Gaze Mapping: A Python package for mapping mobile gaze data to a fixed target stimulus - Published in JOSS (2018)

https://github.com/jeffmacinnes/mobilegazemapping

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 15 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: jeffmacinnes
  • License: bsd-3-clause
  • Language: Python
  • Default Branch: master
  • Size: 14.9 MB
Statistics
  • Stars: 20
  • Watchers: 2
  • Forks: 7
  • Open Issues: 0
  • Releases: 4
Created over 7 years ago · Last pushed over 7 years ago
Metadata Files
Readme Changelog Contributing License

README.md


Mobile Gaze-Mapping

A Python package for mapping mobile gaze data to a fixed target stimulus.

Installation

The mapGaze tool has been built and tested using Python 3.6. To install required dependencies, navigate to the root of this repository and use:

pip install -r requirements.txt

Overview

Mobile eye-trackers allow for measures like gaze position to be recorded under naturalistic conditions where an individual is free to move around. Gaze position is typically recorded relative to an outward facing camera attached to the eye-tracker and approximating the point-of-view of the individual wearing the device. As such, gaze position is recorded relative to the individual's position and orientation, which changes as the participant moves. Since gaze position is recorded without any reference to fixed objects in the environment, this poses a challenge for studying how an individual views a particular stimulus over time.

This toolkit addresses this challenge by automatically identifying the target stimulus on every frame of the recording and mapping the gaze positions to a fixed representation of the stimulus. At the end, gaze positions across the entire recording are expressed in pixel coordinates of the fixed 2D target stimulus.
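The core of this approach is a per-frame perspective transform: once the target stimulus has been located in a world-camera frame, a 3x3 homography maps that frame's pixel coordinates into reference-image pixel coordinates. The sketch below (not the package's exact implementation) shows just the projection step; in practice the homography would be estimated per frame, e.g. via feature matching and `cv2.findHomography`.

```python
import numpy as np

def norm_to_world_px(norm_x, norm_y, frame_w, frame_h):
    """Convert normalized gaze (0-1, origin top-left) into world-camera pixels."""
    return norm_x * frame_w, norm_y * frame_h

def project_gaze(H, x, y):
    """Project a world-camera gaze point (x, y) into reference-image pixel
    coordinates using a 3x3 homography H (estimated separately per frame).
    Homogeneous coordinates: [u, v, w] = H @ [x, y, 1], then divide by w."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Applying `project_gaze` to every gaze sample, with each frame's own homography, yields the fixed-stimulus coordinates the toolkit outputs.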

For more information about this method and examples of how it can be used to facilitate subsequent analysis, see Dynamic Gaze Mapping

Usage Guide

Relevant terms:

  • World Camera: The outward facing video camera attached to the eye-tracking glasses that records the participant's point-of-view.
  • World Camera Coordinate System: The 2D coordinate system of each frame from the world camera. Units are pixels and the origin is in the top-left.
  • Reference Image: A high quality, 2D, digital representation of the target stimulus in the environment that the participant is looking at.
  • Reference Image Coordinate System: The 2D coordinate system of the reference image. Units are pixels and the origin is in the top-left.

Input Data

This tool works with mobile eye-tracking data from any manufacturer, provided the raw data has been preprocessed in a way that yields:

  • Gaze Data File Comma or tab separated text file (.csv or .tsv) that contains the recorded gaze data, with column headings labeled:

    • timestamp: timestamp (ms) corresponding to each sample
    • frame_idx: index (0-based) of the worldCameraVid frame corresponding to each sample
    • confidence: confidence of the validity of each sample (0-1)
    • norm_pos_x: normalized x position of gaze location (0-1), normalized with respect to the width of worldCameraVid
    • norm_pos_y: normalized y position of gaze location (0-1), normalized with respect to the height of worldCameraVid

    Example:

    | timestamp | frame_idx | confidence | norm_pos_x | norm_pos_y |
    |-----------|-----------|------------|------------|------------|
    | 3941.24   | 0         | 1.0        | 0.5098     | 0.0529     |
    | 3962.13   | 0         | 1.0        | 0.5104     | 0.0533     |
    | 3996.01   | 1         | 1.0        | 0.5117     | 0.0823     |

  • World Camera Video Recording .mp4 video file representing the world camera recording from the data collection period.

  • Reference Image Image file representing the target stimulus. Preferably medium-to-high quality (>1000 px on an edge), and cropped to include only the stimulus itself. Gaze positions will be mapped to the pixel coordinates of this image.
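Before running the tool, it can be worth checking that a preprocessed gaze file actually matches this spec. A minimal stdlib-only validator (hypothetical helper, not part of the package) might look like:

```python
import csv

REQUIRED = ["timestamp", "frame_idx", "confidence", "norm_pos_x", "norm_pos_y"]

def validate_gaze_file(path):
    """Check that a preprocessed gaze file has the required column headings
    and that confidence and normalized positions fall in [0, 1].
    Delimiter is inferred from the extension (.tsv -> tab, otherwise comma)."""
    delim = "\t" if path.endswith(".tsv") else ","
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter=delim)
        missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
        if missing:
            raise ValueError(f"missing columns: {missing}")
        for row in reader:
            for col in ("confidence", "norm_pos_x", "norm_pos_y"):
                v = float(row[col])
                if not 0.0 <= v <= 1.0:
                    raise ValueError(f"{col}={v} out of range [0, 1]")
    return True
```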

Creating preprocessed input data

To assist in preprocessing your data, this repository includes tools that will preprocess raw data from a select number of mobile eye-tracking devices. You can find them in the preprocessing directory.

  • preprocessing/pl_preprocessing.py: Built and tested with Pupil Labs 120 Hz binocular mobile eye-tracking glasses
  • preprocessing/smi_preprocessing.py: Built and tested with SMI ETG 2 mobile eye-tracking glasses
  • preprocessing/tobii_preprocessing.py: Built and tested with Tobii Pro Glasses 2

Given the ever-evolving way in which different mobile eye-tracking manufacturers record, store, and format raw data, we offer no support for these preprocessing tools; instead, treat them as a starting point for designing your own customized preprocessing routines. Simply confirm that your preprocessed data includes the files described above.
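A custom preprocessing routine ultimately just needs to emit the tab-separated file described above. As a sketch, assuming a hypothetical raw export of per-sample `(timestamp_ms, frame_idx, confidence, gaze_px_x, gaze_px_y)` tuples in world-camera pixels (adapt to your device's actual format):

```python
import csv

def write_gaze_tsv(samples, out_path, frame_w, frame_h):
    """Write gaze samples in the tsv layout mapGaze.py expects.

    `samples`: iterable of (timestamp_ms, frame_idx, confidence,
    gaze_px_x, gaze_px_y), with pixel positions in world-camera coords.
    Positions are normalized by the world-camera frame dimensions."""
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f, delimiter="\t")
        w.writerow(["timestamp", "frame_idx", "confidence",
                    "norm_pos_x", "norm_pos_y"])
        for ts, idx, conf, gx, gy in samples:
            w.writerow([ts, idx, conf, gx / frame_w, gy / frame_h])
```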

Running Gaze Mapping

To run the mapGaze.py tool, supply the following inputs:

```
usage: mapGaze.py [-h] [-o OUTPUTDIR] gazeData worldCameraVid referenceImage

positional arguments:
  gazeData        path to gaze data file
  worldCameraVid  path to world camera video file
  referenceImage  path to reference image file

optional arguments:
  -h, --help      show this help message and exit
  -o OUTPUTDIR, --outputDir OUTPUTDIR
                  output directory [default: create "mappedGazeOutput" dir in
                  same directory as gazeData file]
```

Example:

python mapGaze.py myGazeFile.tsv myWorldCameraVid.mp4 myReferenceImage.jpg

Output Data

Unless you explicitly supply your own output directory, all of the output will be saved in a new directory named mappedGazeOutput found in the same directory that holds the input gazeData file.

The output files include:

  • world_gaze.m4v: world camera video with original gaze points overlaid
  • ref_gaze.m4v: reference image video with mapped gaze points overlaid
  • ref2world_mapping.m4v: world camera video with reference image projected and inserted into each frame.
  • gazeData_mapped.tsv: tab-separated data file with gaze data represented in both coordinate systems - the world camera video, and the reference image
  • mapGazeLog.log: Log file
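Because gazeData_mapped.tsv expresses gaze in reference-image pixels, downstream analysis (heatmaps, AOI counts) reduces to simple binning. A stdlib-only sketch (the `ref_gazeX`/`ref_gazeY` column names are assumptions; check the header of your own output file and adjust):

```python
import csv
from collections import defaultdict

def gaze_density(mapped_tsv, bin_px=50, x_col="ref_gazeX", y_col="ref_gazeY"):
    """Bin mapped gaze points into a coarse grid over the reference image.

    Returns {(col, row): count}, where each cell is bin_px x bin_px pixels
    in reference-image coordinates."""
    counts = defaultdict(int)
    with open(mapped_tsv, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            x, y = float(row[x_col]), float(row[y_col])
            counts[(int(x // bin_px), int(y // bin_px))] += 1
    return dict(counts)
```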

Test Data

To test your installation, we have included preprocessed files from a brief 2-second recording, which can be found in the tests directory.

To test, navigate to the directory for this repository, and type:

python mapGaze.py tests/gazeData_world.tsv tests/worldCamera.mp4 tests/referenceImage.jpg

It should take ~1 min to complete. Afterwards, you will find all of the output files saved in tests/mappedGazeOutput:

.
└── tests
    ├── gazeData_world.tsv
    ├── mappedGazeOutput
    │   ├── gazeData_mapped.tsv
    │   ├── mapGazeLog.log
    │   ├── ref2world_mapping.m4v
    │   ├── ref_gaze.m4v
    │   ├── testData_referenceImage.jpg
    │   └── world_gaze.m4v
    ├── referenceImage.jpg
    └── worldCamera.mp4

Citing

If you use this code in your work, please cite the JOSS article:

MacInnes et al., (2018). Mobile Gaze Mapping: A Python package for mapping mobile gaze data to a fixed target stimulus. Journal of Open Source Software, 3(31), 984, https://doi.org/10.21105/joss.00984

bibTex

@article{mobileGazeMapping2018,
  doi       = {10.21105/joss.00984},
  url       = {https://doi.org/10.21105/joss.00984},
  year      = {2018},
  month     = {Nov},
  publisher = {The Open Journal},
  volume    = {3},
  number    = {31},
  pages     = {984},
  author    = {Jeff J MacInnes and Shariq Iqbal and John Pearson and Elizabeth N Johnson},
  title     = {Mobile Gaze Mapping: A Python Package for Mapping Mobile Gaze Data to a Fixed Target Stimulus},
  journal   = {The Journal of Open Source Software}
}

referenceImage.jpg copyright:

Jeff Sonhouse, Decompositioning, 2010. Mixed media on canvas. 82 x 76 1/4 inches (208.3 x 193.7 cm). Collection of the Nasher Museum. Museum purchase, 2010.15.1. © Jeff Sonhouse.

Owner

  • Name: Jeff MacInnes
  • Login: jeffmacinnes
  • Kind: user
  • Location: Seattle, WA

JOSS Publication

Mobile Gaze Mapping: A Python package for mapping mobile gaze data to a fixed target stimulus
Published
November 24, 2018
Volume 3, Issue 31, Page 984
Authors
Jeff J. MacInnes
Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, Center for Cognitive Neuroscience, Duke University, Durham, NC
Shariq Iqbal
University of Southern California, Los Angeles, CA
John Pearson
Center for Cognitive Neuroscience, Duke University, Durham, NC
Elizabeth N. Johnson
Center for Cognitive Neuroscience, Duke University, Durham, NC, Wharton Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA
Editor
Christopher R. Madan ORCID
Tags
openCV computer vision eye tracking mobile eye tracking

GitHub Events

Total
  • Watch event: 1
  • Fork event: 2
Last Year
  • Watch event: 1
  • Fork event: 2

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 33
  • Total Committers: 3
  • Avg Commits per committer: 11.0
  • Development Distribution Score (DDS): 0.091
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Jeff MacInnes j****s@g****m 30
Nicholas Nadeau, P.Eng., AVS n****u 2
Christopher Madan c****n 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 12
  • Total pull requests: 3
  • Average time to close issues: 13 days
  • Average time to close pull requests: 17 days
  • Total issue authors: 2
  • Total pull request authors: 2
  • Average comments per issue: 1.17
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • engnadeau (11)
  • worldOfA (1)
Pull Request Authors
  • engnadeau (2)
  • cMadan (1)