nocaps-meta

Meta Learning for Novel Image Captioning

https://github.com/elipugh/nocaps-meta

Science Score: 18.0%

This score indicates how likely this project is to be science-related, based on the following indicators (details appear under an indicator where it was detected):

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (1.9%) to scientific vocabulary

Keywords

maml nocaps updown
Last synced: 6 months ago

Repository

Meta Learning for Novel Image Captioning

Basic Info
  • Host: GitHub
  • Owner: elipugh
  • License: MIT
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 1.65 MB
Statistics
  • Stars: 1
  • Watchers: 2
  • Forks: 0
  • Open Issues: 2
  • Releases: 0
Topics
maml nocaps updown
Created over 5 years ago · Last pushed over 3 years ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

Training UpDown Captioner for Novel Image Captioning with MAML

Find a Colab notebook for training this model here!

This is a class project for Stanford's CS330. Check out the final paper.
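Per the title, the project meta-trains the UpDown captioner with MAML, so that the captioner's initialization adapts quickly to novel-object captioning tasks. As a hedged illustration only (not the repository's actual code, which trains a full captioning model in PyTorch), the first-order MAML training loop can be sketched on a toy scalar regression problem, where each "task" is fitting a line with a task-specific slope:

```python
import random

def loss_grad(w, xs, ys):
    # Mean squared-error gradient for the scalar model y_hat = w * x.
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def fomaml_step(w_meta, tasks, inner_lr=0.05, outer_lr=0.1):
    # First-order MAML: adapt to each task with one inner gradient step on its
    # support set, then update the meta-parameter with the average gradient
    # evaluated at the adapted parameters on the query set.
    meta_grad = 0.0
    for (xs_s, ys_s), (xs_q, ys_q) in tasks:
        w_task = w_meta - inner_lr * loss_grad(w_meta, xs_s, ys_s)  # inner loop
        meta_grad += loss_grad(w_task, xs_q, ys_q)                  # outer gradient
    return w_meta - outer_lr * meta_grad / len(tasks)

def make_task(slope, n=8):
    # A task: regression onto a line with a task-specific slope,
    # split into a support set (adaptation) and a query set (evaluation).
    xs = [random.uniform(-1.0, 1.0) for _ in range(2 * n)]
    ys = [slope * x for x in xs]
    return (xs[:n], ys[:n]), (xs[n:], ys[n:])

random.seed(0)
w = 0.0
for _ in range(300):
    batch = [make_task(random.uniform(1.5, 2.5)) for _ in range(4)]
    w = fomaml_step(w, batch)
# w settles near the mean task slope (~2.0): an initialization from which
# a single inner step adapts quickly to any individual task.
```

In the repository's setting, the scalar `w` would correspond to the UpDown model's weights and each task to a subset of object categories, but the inner/outer loop structure is the same.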

Owner

  • Name: Eli Pugh
  • Login: elipugh
  • Kind: user

Stanford Mathematics and Computer Science · Microsoft Speech & Language

Citation (CITATION.md)

Citation
========

If you find this code useful, consider citing our `nocaps` paper:

```bibtex
@inproceedings{nocaps2019,
  author    = {Harsh Agrawal* and Karan Desai* and Yufei Wang and Xinlei Chen and Rishabh Jain and
             Mark Johnson and Dhruv Batra and Devi Parikh and Stefan Lee and Peter Anderson},
  title     = {{nocaps}: {n}ovel {o}bject {c}aptioning {a}t {s}cale},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2019}
}
```

Please also cite the paper that proposed the UpDown model:

```bibtex
@inproceedings{Anderson2017up-down,
  author    = {Peter Anderson and Xiaodong He and Chris Buehler and Damien Teney and Mark Johnson
               and Stephen Gould and Lei Zhang},
  title     = {Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year      = {2018}
}
```


If you evaluate your models on our `nocaps` benchmark, please consider citing
[EvalAI](https://evalai.cloudcv.org) — the platform which hosts our evaluation server:

```bibtex
@inproceedings{evalai,
    title     = {EvalAI: Towards Better Evaluation Systems for AI Agents},
    author    = {Deshraj Yadav and Rishabh Jain and Harsh Agrawal and Prithvijit
                 Chattopadhyay and Taranjeet Singh and Akash Jain and Shiv Baran
                 Singh and Stefan Lee and Dhruv Batra},
    booktitle = {Workshop on AI Systems at SOSP 2019},
    year      = {2019}
}
```


Dependencies

requirements.txt (PyPI)
  • allennlp ==0.8.4
  • anytree ==2.6.0
  • cython ==0.29.1
  • evalai ==1.3.0
  • h5py ==2.8.0
  • mypy_extensions ==0.4.1
  • nltk ==3.4.5
  • numpy ==1.15.4
  • pillow ==8.1.1
  • tb-nightly *
  • tensorboardX ==1.7
  • torch ==1.1.0
  • tqdm ==4.28.1
  • yacs ==0.1.6