logocap

Official code of LOGO-CAP (CVPR '22). https://arxiv.org/abs/2109.03622

https://github.com/cherubicxn/logocap

Science Score: 28.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Keywords

cvpr2022 human-pose-estimation multiperson
Last synced: 4 months ago

Repository

Official code of LOGO-CAP (CVPR '22). https://arxiv.org/abs/2109.03622

Basic Info
  • Host: GitHub
  • Owner: cherubicXN
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 2.9 MB
Statistics
  • Stars: 33
  • Watchers: 3
  • Forks: 7
  • Open Issues: 1
  • Releases: 0
Topics
cvpr2022 human-pose-estimation multiperson
Created almost 4 years ago · Last pushed over 3 years ago
Metadata Files
Readme Citation

README.md

Learning Local-Global Contextual Adaptation for Multi-Person Pose Estimation (CVPR 2022)

This is the official repository of our paper.

Abstract: This paper studies the problem of multi-person pose estimation in a bottom-up fashion. With a new and strong observation that the localization issue of the center-offset formulation can be remedied in a local-window search scheme in an ideal situation, we propose a multi-person pose estimation approach, dubbed as LOGO-CAP, by learning the LOcal-GlObal Contextual Adaptation for human Pose. Specifically, our approach learns the keypoint attraction maps (KAMs) from the local keypoints expansion maps (KEMs) in small local windows in the first step, which are subsequently treated as dynamic convolutional kernels on the keypoints-focused global heatmaps for contextual adaptation, achieving accurate multi-person pose estimation. Our method is end-to-end trainable with near real-time inference speed in a single forward pass, obtaining state-of-the-art performance on the COCO keypoint benchmark for bottom-up human pose estimation. With the COCO trained model, our method also outperforms prior arts by a large margin on the challenging OCHuman dataset.
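The adaptation step described in the abstract, using the learned keypoint attraction maps (KAMs) as dynamic convolution kernels on the keypoint-focused global heatmaps, can be sketched roughly as below. This is only an illustrative PyTorch snippet, not the authors' implementation; the tensor names, shapes, and the 7x7 window size are assumptions.

```
# Illustrative sketch of local-global contextual adaptation (not the
# authors' code): per-person, per-keypoint KAMs are applied as dynamic
# convolution kernels on keypoint-focused global heatmaps.
import torch
import torch.nn.functional as F

def contextual_adaptation(kams: torch.Tensor, global_heatmaps: torch.Tensor) -> torch.Tensor:
    """kams:            (N, K, k, k)  one small kernel per person/keypoint
    global_heatmaps:    (N, K, H, W)  keypoint-focused global heatmaps
    returns:            (N, K, H, W)  contextually adapted heatmaps
    """
    n, k_joints, kh, kw = kams.shape
    # Fold the batch dimension into channels so each (person, keypoint)
    # pair gets its own kernel via a grouped convolution.
    x = global_heatmaps.reshape(1, n * k_joints, *global_heatmaps.shape[-2:])
    w = kams.reshape(n * k_joints, 1, kh, kw)
    out = F.conv2d(x, w, padding=kh // 2, groups=n * k_joints)
    return out.reshape_as(global_heatmaps)

# Toy usage: 2 persons, 17 COCO keypoints, 7x7 local kernels, 96x96 heatmaps.
adapted = contextual_adaptation(torch.rand(2, 17, 7, 7), torch.rand(2, 17, 96, 96))
print(adapted.shape)  # torch.Size([2, 17, 96, 96])
```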


Installation (tested on Ubuntu-18.04, CUDA 11.1, pytorch-lts)

```
conda env create -f environment.yml
conda activate logocap
```
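A quick sanity check after activating the environment (a minimal sketch, assuming the environment file installs PyTorch LTS with CUDA support as stated above):

```
# Verify that PyTorch is installed and can see the GPU.
import torch
print(torch.__version__)          # e.g. 1.8.x for the LTS branch
print(torch.cuda.is_available())  # should be True on a CUDA 11.1 machine
```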

Data Preparation

Download the COCO and OCHuman datasets into the data directory with the following structure:

```
|-- data
|   |-- coco
|   |   |-- annotations
|   |   |-- images
|   |-- OCHuman
|       |-- annotations
|       |-- images
```
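To verify the layout, the validation annotations can be loaded with pycocotools (which is listed in the environment); a minimal sketch, assuming the standard COCO annotation file name:

```
# Illustrative check that the COCO keypoint annotations are in place.
from pycocotools.coco import COCO

coco = COCO('data/coco/annotations/person_keypoints_val2017.json')
img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))
print(f'{len(img_ids)} person images in COCO val2017')
```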

Model Weights for Training and Testing

Download the pretrained HRNet backbone models for training and the trained LOGO-CAP models with HRNet backbones using the following script:

```
cd weights
sh download.sh
cd ..
```
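To confirm that a download succeeded, a checkpoint can be opened with `torch.load`; a minimal sketch, noting that the structure of the checkpoint dict is not documented here:

```
# Illustrative check that a downloaded checkpoint file loads.
import torch

ckpt = torch.load('weights/logocap/logocap-hrnet-w32-coco.pth.tar',
                  map_location='cpu')
print(type(ckpt))  # typically a dict / OrderedDict of tensors
```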

Quickstart

After you have prepared the COCO dataset on your machine, you can run the following commands to evaluate our models on the COCO val2017 dataset.

```
# HRNet-W32 backbone
python tools/test.py --cfg experiments/logocap-hrnet-w32-coco.yaml --ckpt weights/logocap/logocap-hrnet-w32-coco.pth.tar

# HRNet-W48 backbone
python tools/test.py --cfg experiments/logocap-hrnet-w48-coco.yaml --ckpt weights/logocap/logocap-hrnet-w48-coco.pth.tar
```
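For reference, COCO keypoint AP/AR is the standard metric reported by such evaluations and can be computed with pycocotools' `COCOeval`; a minimal sketch, assuming predictions have been written to a COCO-format results JSON (the results file name below is hypothetical):

```
# Standard COCO keypoint evaluation with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('data/coco/annotations/person_keypoints_val2017.json')
coco_dt = coco_gt.loadRes('results/keypoints_val2017_results.json')  # hypothetical path
evaluator = COCOeval(coco_gt, coco_dt, iouType='keypoints')
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints keypoint AP / AR
```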

TODO List

  • Training scripts
  • README.md
  • Visualization code

Citations

If you find our work useful in your research, please consider citing:

```
@inproceedings{LOGOCAP,
  title     = "Learning Local-Global Contextual Adaptation for Multi-Person Pose Estimation",
  author    = "Nan Xue and Tianfu Wu and Gui-Song Xia and Liangpei Zhang",
  booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
  year      = {2022},
}
```

Acknowledgements

Our code is based on DEKR. We thank Linxi Huan, Liang Dong, Fudong Wang and the anonymous reviewers for their helpful discussions and comments.

Owner

  • Name: Nan Xue
  • Login: cherubicXN
  • Kind: user
  • Location: Wuhan
  • Company: Wuhan University

Computer Vision Researcher

Citation (citation.bib)

@inproceedings{LOGOCAP,
title = "Learning Local-Global Contextual Adaptation for Multi-Person Pose Estimation",
author = "Nan Xue and Tianfu Wu and Gui-Song Xia and Liangpei Zhang",
booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
year = {2022},
}

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Dependencies

environment.yml (pypi)
  • charset-normalizer ==2.0.6
  • click ==8.0.1
  • colorcet ==2.0.6
  • configparser ==5.0.2
  • cycler ==0.10.0
  • cython ==0.29.24
  • docker-pycreds ==0.4.0
  • fvcore ==0.1.5.post20210924
  • gitdb ==4.0.7
  • gitpython ==3.1.24
  • idna ==3.2
  • json-tricks ==3.15.5
  • kiwisolver ==1.3.1
  • matplotlib ==3.4.2
  • nose ==1.3.7
  • opencv-python ==4.5.3.56
  • param ==1.12.0
  • pathtools ==0.1.2
  • promise ==2.3
  • protobuf ==3.18.0
  • psutil ==5.8.0
  • pycocotools ==2.0.2
  • pyct ==0.4.8
  • pyparsing ==2.4.7
  • python-dateutil ==2.8.2
  • pyyaml ==5.4.1
  • random-word ==1.0.7
  • requests ==2.26.0
  • sentry-sdk ==1.4.2
  • shortuuid ==1.0.1
  • smmap ==4.0.0
  • subprocess32 ==3.5.4
  • termcolor ==1.1.0
  • tqdm ==4.61.2
  • urllib3 ==1.26.7
  • wandb ==0.12.2
  • yacs ==0.1.8
  • yaspin ==2.1.0