libauc

LibAUC: A Deep Learning Library for X-Risk Optimization

https://github.com/optimization-ai/libauc

Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 5 committers (20.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.4%) to scientific vocabulary

Keywords

auprc auroc contrastive-learning deep-learning machine-learning ndcg optimization pytorch ranking-algorithm self-supervised-learning
Last synced: 4 months ago

Repository

LibAUC: A Deep Learning Library for X-Risk Optimization

Basic Info
  • Host: GitHub
  • Owner: Optimization-AI
  • License: mit
  • Language: Python
  • Default Branch: 1.4.0
  • Homepage: https://libauc.org/
  • Size: 1.64 MB
Statistics
  • Stars: 307
  • Watchers: 5
  • Forks: 38
  • Open Issues: 0
  • Releases: 5
Topics
auprc auroc contrastive-learning deep-learning machine-learning ndcg optimization pytorch ranking-algorithm self-supervised-learning
Created over 4 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md


LibAUC: A Deep Learning Library for X-Risk Optimization


| Documentation | Installation | Website | Tutorial | Research | Github |

News

  • [8/14/2024]: New version available: We are releasing LibAUC 1.4.0. This release adds new optimizers, losses, and models, and improves several existing optimizers. For more details, please check the latest release note.

  • [04/07/2024]: Bugs fixed: We fixed a bug in datasets/folder.py by returning a returnindex to support SogCLR/iSogCLR for contrastive learning, fixed incorrect allgather communication in GCLoss_v1, and reset gamma to its original value when u is not 0. None of these bugs were present in the experimental code for the paper.

  • [02/11/2024]: Bug fixed: We fixed a bug in the calculation of the AUCM loss and MultiLabelAUCM loss: the margin parameter was missing from the original calculation, which could make the loss negative. This does not affect learning, since the parameter updates are unchanged. Both the source code and the pip package have been updated.

  • [06/10/2023]: LibAUC 1.3.0 is now available! In this update, we have made improvements and introduced new features. We also release a new documentation website at https://docs.libauc.org/. Please see the release notes for details.

Why LibAUC?

LibAUC offers an easy way to directly optimize commonly used performance measures and losses with a user-friendly API. It has broad applications in AI, such as classification of imbalanced data (CID), learning to rank (LTR), and contrastive learning of representations (CLR). LibAUC provides a unified framework that abstracts the optimization of many compositional loss functions, including surrogate losses for AUROC, AUPRC/AP, and partial AUROC (suitable for CID); surrogate losses for NDCG, top-K NDCG, and listwise losses (used in LTR); and global contrastive losses for CLR.
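To make the idea of a compositional surrogate concrete, here is a minimal NumPy sketch of the min-max AUC-margin objective from the deep AUC maximization line of work. This is an illustration of the published formulation, not LibAUC's implementation; the function name `aucm_surrogate` and the fixed scalars `a`, `b`, `alpha` are ours for demonstration (in practice they are learned jointly with the model).

```python
import numpy as np

def aucm_surrogate(scores, labels, a, b, alpha, margin=1.0):
    """Illustrative AUC-margin objective (hypothetical helper, not LibAUC's API).

    scores: model outputs in [0, 1]; labels: binary 0/1 array.
    a, b estimate the mean positive/negative scores; alpha is the dual
    variable of the min-max formulation (maximized, hence the -alpha**2 term).
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return (np.mean((pos - a) ** 2)                          # spread of positives around a
            + np.mean((neg - b) ** 2)                        # spread of negatives around b
            + 2 * alpha * (margin + np.mean(neg) - np.mean(pos))  # margin violation term
            - alpha ** 2)                                    # concave in alpha

# toy batch: two positives scored high, two negatives scored low
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
value = aucm_surrogate(scores, labels, a=0.0, b=0.0, alpha=0.5)
```

The key point is that the objective couples statistics over the positive and negative subsets, which is why plain minibatch SGD is biased for it and specialized optimizers such as PESG are needed.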

Installation

Installing from pip

$ pip install -U libauc

Installing from source

$ git clone https://github.com/Optimization-AI/LibAUC.git
$ cd LibAUC
$ pip install .

Usage

Example training pipeline for optimizing an X-risk (e.g., AUROC)

```python
# import our loss and optimizer
import torch
from libauc.losses import AUCMLoss
from libauc.optimizers import PESG

# pretrain your model through supervised or self-supervised learning;
# load a pretrained encoder and randomly initialize the last linear layer

# define loss & optimizer
Loss = AUCMLoss()
optimizer = PESG()
...

# training
model.train()
for data, targets in trainloader:
    data, targets = data.cuda(), targets.cuda()
    logits = model(data)
    preds = torch.sigmoid(logits)
    loss = Loss(preds, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
...

# update internal parameters of the optimizer (e.g., at the end of each epoch)
optimizer.update_regularizer()
```

Tutorials

X-Risk Minimization

Other Applications

- [Constructing benchmark imbalanced datasets for CIFAR10, CIFAR100, CATvsDOG, STL10](https://github.com/Optimization-AI/LibAUC/blob/main/examples/01_Creating_Imbalanced_Benchmark_Datasets.ipynb)
- [Using LibAUC with PyTorch learning rate scheduler](https://github.com/Optimization-AI/LibAUC/blob/main/examples/04_Training_with_Pytorch_Learning_Rate_Scheduling.ipynb)
- [Optimizing AUROC loss on Chest X-Ray dataset (CheXpert)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/05_Optimizing_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)
- [Optimizing AUROC loss on Skin Cancer dataset (Melanoma)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/08_Optimizing_AUROC_Loss_with_DenseNet121_on_Melanoma.ipynb)
- [Optimizing multi-label AUROC loss on Chest X-Ray dataset (CheXpert)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/07_Optimizing_Multi_Label_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)
- [Optimizing AUROC loss on Tabular dataset (Credit Fraud)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/12_Optimizing_AUROC_Loss_on_Tabular_Data.ipynb)
- [Optimizing AUROC loss for Federated Learning](https://github.com/Optimization-AI/LibAUC/blob/main/examples/scripts/06_Optimizing_AUROC_loss_with_DenseNet121_on_CIFAR100_in_Federated_Setting_CODASCA.py)
- [Optimizing GCLoss (Bimodal with Cosine Gamma)](https://docs.libauc.org/examples/sogclr_gamma.html)
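The imbalanced-benchmark construction in the first tutorial can be sketched as follows. This is a hypothetical helper, not LibAUC's API: it assumes the common recipe of binarizing a multi-class label set (second half of the classes becomes the positive class) and subsampling positives to a target imbalance ratio; the name `make_imbalanced` and its parameters are ours for illustration.

```python
import numpy as np

def make_imbalanced(labels, imratio=0.1, seed=0):
    """Hypothetical sketch (not LibAUC's API): binarize multi-class labels
    and subsample the positive class to imratio = n_pos / (n_pos + n_neg)."""
    labels = np.asarray(labels)
    n_classes = labels.max() + 1
    binary = (labels >= n_classes // 2).astype(int)  # second half of classes -> positive
    neg_idx = np.flatnonzero(binary == 0)
    pos_idx = np.flatnonzero(binary == 1)
    # number of positives to keep so that n_pos / (n_pos + n_neg) ~= imratio
    n_pos = int(len(neg_idx) * imratio / (1 - imratio))
    rng = np.random.default_rng(seed)
    keep_pos = rng.choice(pos_idx, size=n_pos, replace=False)
    keep = np.concatenate([neg_idx, keep_pos])
    return keep, binary[keep]

# e.g. 10 balanced classes with 100 samples each -> 500 negatives,
# and 55 positives kept at imratio=0.1
labels = np.repeat(np.arange(10), 100)
keep, binary = make_imbalanced(labels, imratio=0.1)
```

The returned index array can then be used to subset an image dataset such as CIFAR10 before wrapping it in a DataLoader.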

Citation

If you find LibAUC useful in your work, please cite the following papers:

@inproceedings{yuan2023libauc,
  title={LibAUC: A Deep Learning Library for X-Risk Optimization},
  author={Zhuoning Yuan and Dixian Zhu and Zi-Hao Qiu and Gang Li and Xuanhui Wang and Tianbao Yang},
  booktitle={29th SIGKDD Conference on Knowledge Discovery and Data Mining},
  year={2023}
}

@article{yang2022algorithmic,
  title={Algorithmic Foundations of Empirical X-Risk Minimization},
  author={Yang, Tianbao},
  journal={arXiv preprint arXiv:2206.00439},
  year={2022}
}

Contact

For any technical questions, please open a new issue on GitHub. If you have any other questions, please contact us at libaucx@gmail.com or tianbao-yang@tamu.edu.

Owner

  • Name: Optimization for Machine Learning and AI
  • Login: Optimization-AI
  • Kind: organization

OptMAI Lab at Texas A&M University directed by Professor Tianbao Yang

Citation (citations.bib)

@inproceedings{yuan2023libauc,
  title={LibAUC: A Deep Learning Library for X-Risk Optimization},
  author={Zhuoning Yuan and Dixian Zhu and Zi-Hao Qiu and Gang Li and Xuanhui Wang and Tianbao Yang},
  booktitle={29th SIGKDD Conference on Knowledge Discovery and Data Mining},
  year={2023}
}
  
@article{yang2022algorithmic,
  title={Algorithmic Foundation of Deep X-Risk Optimization},
  author={Yang, Tianbao},
  journal={arXiv preprint arXiv:2206.00439},
  year={2022}
}

@article{yang2022auc,
  title={AUC Maximization in the Era of Big Data and AI: A Survey},
  author={Yang, Tianbao and Ying, Yiming},
  journal={arXiv preprint arXiv:2203.15046},
  year={2022}
}

@inproceedings{yuan2022provable,
  title={Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance},
  author={Yuan, Zhuoning and Wu, Yuexin and Qiu, Zihao and Du, Xianzhi and Zhang, Lijun and Zhou, Denny and Yang, Tianbao},
  booktitle={International Conference on Machine Learning},
  year={2022},
  organization={PMLR}
}


@inproceedings{qiu2022large,
  title={Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence},
  author={Qiu, Zi-Hao and Hu, Quanqi and Zhong, Yongjian and Zhang, Lijun and Yang, Tianbao},
  booktitle={International Conference on Machine Learning},
  year={2022},
  organization={PMLR}
}

@inproceedings{zhu2022auc,
  title={When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee},
  author={Zhu, Dixian and Li, Gang and Wang, Bokun and Wu, Xiaodong and Yang, Tianbao},
  booktitle={International Conference on Machine Learning},
  year={2022},
  organization={PMLR}
}


@inproceedings{yuan2021compositional,
  title={Compositional Training for End-to-End Deep AUC Maximization},
  author={Yuan, Zhuoning and Guo, Zhishuai and Chawla, Nitesh and Yang, Tianbao},
  booktitle={International Conference on Learning Representations},
  year={2022}
}
}


@inproceedings{yuan2021federated,
  title={Federated deep AUC maximization for heterogeneous data with a constant communication complexity},
  author={Yuan, Zhuoning and Guo, Zhishuai and Xu, Yi and Ying, Yiming and Yang, Tianbao},
  booktitle={International Conference on Machine Learning},
  pages={12219--12229},
  year={2021},
  organization={PMLR}
}

@inproceedings{yuan2021large,
  title={Large-scale robust deep auc maximization: A new surrogate loss and empirical studies on medical image classification},
  author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3040--3049},
  year={2021}
}
@article{qi2021stochastic,
  title={Stochastic optimization of areas under precision-recall curves with provable convergence},
  author={Qi, Qi and Luo, Youzhi and Xu, Zhao and Ji, Shuiwang and Yang, Tianbao},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={1752--1765},
  year={2021}
}

GitHub Events

Total
  • Issues event: 7
  • Watch event: 19
  • Issue comment event: 11
  • Fork event: 1
Last Year
  • Issues event: 7
  • Watch event: 19
  • Issue comment event: 11
  • Fork event: 1

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 147
  • Total Committers: 5
  • Avg Commits per committer: 29.4
  • Development Distribution Score (DDS): 0.034
Top Committers
Name Email Commits
Zhuoning Yuan 3****g@u****m 142
zhqiu 3****u@u****m 2
GangLii g****5@g****m 1
QiQi q****i@u****u 1
Dixian Zhu z****n@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 53
  • Total pull requests: 17
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 8 days
  • Total issue authors: 45
  • Total pull request authors: 6
  • Average comments per issue: 1.81
  • Average comments per pull request: 0.06
  • Merged pull requests: 11
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 7
  • Pull requests: 0
  • Average time to close issues: 9 days
  • Average time to close pull requests: N/A
  • Issue authors: 7
  • Pull request authors: 0
  • Average comments per issue: 1.57
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • berkuva (3)
  • optmai (3)
  • yanruiD (3)
  • SNEAndy (2)
  • rohan1561 (2)
  • zhang090210 (1)
  • CaptainSxy (1)
  • SaumyaBhandari (1)
  • seoulsky-field (1)
  • Evap6 (1)
  • StefanIsSmart (1)
  • mxadorable (1)
  • ayhyap (1)
  • BoredGeo (1)
  • RickeyBorges (1)
Pull Request Authors
  • PenGuln (12)
  • xywei00 (6)
  • optmai (4)
  • GangLii (2)
  • s-rog (2)
  • DixianZhu (2)
Top Labels
Issue Labels
enhancement (3) follow-up (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 1,575 last-month
  • Total docker downloads: 8
  • Total dependent packages: 0
  • Total dependent repositories: 3
  • Total versions: 25
  • Total maintainers: 2
pypi.org: libauc

LibAUC: A Deep Learning Library for X-Risk Optimization

  • Versions: 25
  • Dependent Packages: 0
  • Dependent Repositories: 3
  • Downloads: 1,575 Last month
  • Docker Downloads: 8
Rankings
Stargazers count: 4.2%
Docker downloads count: 4.2%
Forks count: 6.8%
Average: 6.9%
Downloads: 7.4%
Dependent repos count: 9.0%
Dependent packages count: 10.0%
Maintainers (2)
Last synced: 5 months ago