https://github.com/daeh/computed-appraisals

Computed Appraisals Model. Code and data for the 2023 paper, "Emotion prediction as computation over a generative theory of mind"

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary

Keywords

affective-computing cognitive-science emotional-intelligence probabilistic-programming psychology
Last synced: 5 months ago

Repository

Computed Appraisals Model. Code and data for the 2023 paper, "Emotion prediction as computation over a generative theory of mind"

Basic Info
  • Host: GitHub
  • Owner: daeh
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 9.52 MB
Statistics
  • Stars: 12
  • Watchers: 1
  • Forks: 2
  • Open Issues: 0
  • Releases: 0
Topics
affective-computing cognitive-science emotional-intelligence probabilistic-programming psychology
Created almost 3 years ago · Last pushed over 2 years ago
Metadata Files
Readme

README.md

Emotion prediction as computation over a generative theory of mind

Computed Appraisals Model

ABSTRACT From sparse descriptions of events, observers can make systematic and nuanced predictions of what emotions the people involved will experience. We propose a formal model of emotion prediction in the context of a public high-stakes social dilemma. This model uses inverse planning to infer a person's beliefs and preferences, including social preferences for equity and for maintaining a good reputation. The model then combines these inferred mental contents with the event to compute 'appraisals': whether the situation conformed to the expectations and fulfilled the preferences. We learn functions mapping computed appraisals to emotion labels, allowing the model to match human observers' quantitative predictions of twenty emotions, including joy, relief, guilt, and envy. Model comparison indicates that inferred monetary preferences are not sufficient to explain observers' emotion predictions; inferred social preferences are factored into predictions for nearly every emotion. Human observers and the model both use minimal individualizing information to adjust predictions of how different people will respond to the same event. Thus, our framework integrates inverse planning, event appraisals, and emotion concepts in a single computational model to reverse-engineer people's intuitive theory of emotions.

Project information

This work is described in the open-access paper (doi: 10.1098/rsta.2022.0047).

The GitHub repository (https://github.com/daeh/computed-appraisals) provides all of the raw behavioral data, models, and analyses.

The OSF repository (https://osf.io/yhwqn) provides the cached model data and the behavioral paradigms used to collect the empirical data.

Citing this work

If you use this repository or the data it includes, or build on the models/analyses, please cite the paper (NB the citation below is given in BibLaTeX):

```bibtex
@article{houlihan2023computedappraisals,
  title        = {Emotion Prediction as Computation over a Generative Theory of Mind},
  author       = {Houlihan, Sean Dae and Kleiman-Weiner, Max and Hewitt, Luke B. and Tenenbaum, Joshua B. and Saxe, Rebecca},
  date         = {2023},
  journaltitle = {Philosophical Transactions of the Royal Society A},
  shortjournal = {Phil. Trans. R. Soc. A},
  volume       = {381},
  number       = {2251},
  pages        = {20220047},
  doi          = {10.1098/rsta.2022.0047},
  url          = {https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2022.0047}
}
```

Contents of the project

  • code - models and analyses
  • dataIn - raw behavioral data collected from human observers (see the empirical data documentation)
  • dataOut - cached model data (only on OSF)
  • paradigms - mTurk experiments used to collect the behavioral data in dataIn/ (only on OSF)

Running the Computed Appraisal Model

To run the computed appraisals model (cam), you can install the dependencies necessary to regenerate the results and figures using (1) a Docker container, (2) a conda environment, or (3) a pip specification.

NB Running this model from scratch is prohibitively compute-heavy outside of a High Performance Computing cluster. The model has been cached at various checkpoints to make it easy to explore the results on a personal computer. To make use of the cached model, download and uncompress dataOut-cached.zip. Then place the dataOut directory containing the cached *.pkl files in the local project folder that you clone/fork from this repository (e.g. computed-appraisals/dataOut/).
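As a concrete sketch, assuming dataOut-cached.zip was downloaded from the OSF repository into the root of your clone and expands to a dataOut/ directory, the placement might look like this:

```bash
# From the project root (e.g. computed-appraisals/), with dataOut-cached.zip
# downloaded here from https://osf.io/yhwqn
# (assumes the archive expands to a dataOut/ directory)
unzip dataOut-cached.zip

# Confirm the cached model checkpoints are in place
ls dataOut/*.pkl
```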

1. Docker container

Requires Docker. The image includes WebPPL and TeX Live.

NB Docker is finicky about the CPU architecture. The example below builds an image optimized for arm64 processors. For an example of building an image for amd64 processors, see .devcontainer/Dockerfile.

```bash
# Clone git repo to the current working directory
git clone --branch main https://github.com/daeh/computed-appraisals.git computed-appraisals

# Enter the new directory
cd computed-appraisals

# (optional but recommended)
# Add the "dataOut" directory that you downloaded from OSF in order to use the cached model

# Build Docker Image
docker build --tag camimage .

# Run Container (optional to specify resources like memory and cpus)
docker run --rm --name=cam \
  --memory 12GB --cpus 4 --platform=linux/arm64 \
  --volume $(pwd)/:/projhost/ \
  camimage /projhost/code/cam_main.py --projectdir /projhost/
```

The container tag is arbitrary (you can replace camimage with a different label).
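If you are on an amd64 machine, a hedged sketch of the corresponding build using the .devcontainer/Dockerfile mentioned in the note above (the tag camimage-amd64 is an arbitrary label chosen here, not part of the project):

```bash
# Build from the amd64-oriented Dockerfile referenced above
# (--platform is explicit here; it may be redundant on a native amd64 host)
docker build --platform=linux/amd64 \
  --file .devcontainer/Dockerfile \
  --tag camimage-amd64 .
```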

2. conda environment

Requires conda, conda-lock, and a local installation of TeX Live. If you want to run the inverse planning models, you need to have the WebPPL executable in your PATH with the webppl-json add-on.
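Before creating the environment, it can help to confirm that these prerequisites resolve on your PATH; a minimal check, assuming standard command names and that webppl-json was installed globally via npm:

```bash
# Check that the prerequisite tools are on PATH
command -v conda conda-lock latex webppl

# Only needed for the inverse planning models: confirm the webppl-json add-on
# (assumes a global npm install)
npm list -g webppl-json
```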

The example below uses the conda-lock.yml file to create an environment where the package versions are pinned to this project's specifications, which is recommended for reproducibility. If the lock file cannot resolve the dependencies for your system, you can use the environment.yml file to create an environment with the latest package versions. Simply replace the conda-lock install ... line with conda env create -f environment.yml.
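In that case, the environment-creation step shown below would instead be:

```bash
# Fallback: create the environment from environment.yml (latest package versions,
# not the pinned versions from the lock file).
# NB the environment name is set inside environment.yml and may differ from envcam.
conda env create -f environment.yml
```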

```bash
# Clone git repo to the current working directory
git clone --branch main https://github.com/daeh/computed-appraisals.git computed-appraisals

# Enter the new directory
cd computed-appraisals

# (optional but recommended)
# Add the "dataOut" directory that you downloaded from OSF in order to use the cached model

# Create the conda environment
conda-lock install --name envcam conda-lock.yml

# Activate the conda environment
conda activate envcam

# Run the python code
python ./code/cam_main.py --projectdir $(pwd)
```

The conda environment name is arbitrary (you can replace envcam with a different label).

3. pip specification

If you use a strategy other than conda or Docker to manage Python environments, you can install the dependencies using the requirements.txt file located in the root directory of the project. You need to have TeX Live installed locally. If you want to run the inverse planning models, you need to have the WebPPL executable in your PATH with the webppl-json add-on.
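A minimal sketch of that route using Python's built-in venv module (the .venv name is arbitrary; the final run command mirrors the conda example above):

```bash
# Create and activate a virtual environment (the name .venv is arbitrary)
python3 -m venv .venv
source .venv/bin/activate

# Install the pinned dependencies from the project root
pip install -r requirements.txt

# Run the python code, as in the conda example
python ./code/cam_main.py --projectdir $(pwd)
```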

Note on PyTorch

Different hardware architectures lead to very small differences in floating point operations. In our experience, setting a random seed causes PyTorch to initialize at the same values, but update steps of the Adam optimizer exhibit minuscule differences depending on the platform (e.g. Intel Xeon E5 vs Intel Xeon Gold cores). As such, rerunning the PyTorch models may yield results that show small numerical differences from the cached data.

Authors

Owner

  • Name: Dae
  • Login: daeh
  • Kind: user
  • Location: Cambridge, MA
  • Company: MIT

Neukom Computational Science Postdoc Fellow at Dartmouth. PhD from MIT Brain and Cognitive Sciences.

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0