https://github.com/deeprine/awesome-continual-learning
A paper list of our recent survey on continual learning, and other useful resources in this field.
Fork of lywang3081/Awesome-Continual-Learning
# Awesome-Continual-Learning

This is the paper list of our paper "A Comprehensive Survey of Continual Learning: Theory, Method and Application" [[link](https://arxiv.org/abs/2302.00487)]. We will continue to add useful resources to this repo.

## Survey

- **[2023 arXiv]** A Comprehensive Survey of Continual Learning: Theory, Method and Application [[paper](https://arxiv.org/abs/2302.00487)]
- **[2023 arXiv]** A Survey on Incremental Update for Neural Recommender Systems [[paper](https://arxiv.org/abs/2303.02851)]
- **[2023 arXiv]** Deep Class-Incremental Learning: A Survey [[paper](https://arxiv.org/abs/2302.03648)]
- **[2023 arXiv]** Towards Label-Efficient Incremental Learning: A Survey [[paper](https://arxiv.org/abs/2302.00353)]
- **[2022 arXiv]** Continual Learning of Natural Language Processing Tasks: A Survey [[paper](https://arxiv.org/abs/2211.12701)]
- **[2022 Trends in Neurosciences]** Contributions by metaplasticity to solving the Catastrophic Forgetting Problem [[paper](https://doi.org/10.1016/j.tins.2022.06.002)]
- **[2022 TPAMI]** Class-Incremental Learning: Survey and Performance Evaluation on Image Classification [[paper](https://arxiv.org/abs/2010.15277)]
- **[2022 NMI]** Biological underpinnings for lifelong learning machines [[paper](https://www.nature.com/articles/s42256-022-00452-0)]
- **[2022 Neurocomputing]** Online Continual Learning in Image Classification: An Empirical Survey [[paper](https://arxiv.org/abs/2101.10423)]
- **[2022 JAIR]** Towards Continual Reinforcement Learning [[paper](https://arxiv.org/abs/2012.13490)]
- **[2021 arXiv]** Recent Advances of Continual Learning in Computer Vision: An Overview [[paper](https://arxiv.org/abs/2109.11369)]
- **[2021 TPAMI]** A continual learning survey: Defying forgetting in classification tasks [[paper](https://arxiv.org/abs/1909.08383)]
- **[2021 Neural Computation]** Replay in Deep Learning: Current Approaches and Missing Biological Elements [[paper](https://arxiv.org/abs/2104.04132)]
- **[2020 Trends in Cognitive Sciences]** Embracing Change: Continual Learning in Deep Neural Networks [[paper](https://www.sciencedirect.com/science/article/pii/S1364661320302199)]
- **[2020 COLING]** Continual Lifelong Learning in Natural Language Processing: A Survey [[paper](https://arxiv.org/abs/2012.09823)]
- **[2019 Neural Networks]** Continual Lifelong Learning with Neural Networks: A Review [[paper](https://arxiv.org/abs/1802.07569)]

## Papers

### 2023

- **[2023 CVPR]** Dealing With Cross-Task Class Discrimination in Online Continual Learning [[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Guo_Dealing_With_Cross-Task_Class_Discrimination_in_Online_Continual_Learning_CVPR_2023_paper.html)][[code](https://github.com/gydpku/GSA)]
- **[2023 CVPR]** Decoupling Learning and Remembering: A Bilevel Memory Framework With Knowledge Projection for Task-Incremental Learning [[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Sun_Decoupling_Learning_and_Remembering_A_Bilevel_Memory_Framework_With_Knowledge_CVPR_2023_paper.html)][[code](https://github.com/SunWenJu123/BMKP)]
- **[2023 CVPR]** GKEAL: Gaussian Kernel Embedded Analytic Learning for Few-Shot Class Incremental Task [[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhuang_GKEAL_Gaussian_Kernel_Embedded_Analytic_Learning_for_Few-Shot_Class_Incremental_CVPR_2023_paper.pdf)]
- **[2023 CVPR]** EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization [[paper](https://arxiv.org/abs/2303.01904)]
- **[2023 CVPR]** Endpoints Weight Fusion for Class Incremental Semantic Segmentation [[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Xiao_Endpoints_Weight_Fusion_for_Class_Incremental_Semantic_Segmentation_CVPR_2023_paper.pdf)]
- **[2023 CVPR]** On the Stability-Plasticity Dilemma of Class-Incremental Learning [[paper](https://arxiv.org/pdf/2304.01663.pdf)]
- **[2023 CVPR]** Regularizing Second-Order Influences for Continual Learning [[paper](https://arxiv.org/pdf/2304.10177.pdf)][[code](https://github.com/feifeiobama/InfluenceCL)]
- **[2023 CVPR]** Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning [[paper](https://arxiv.org/pdf/2201.12559.pdf)]
- **[2023 CVPR]** Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [[paper](https://arxiv.org/pdf/2304.05288.pdf)]
- **[2023 CVPR]** A Probabilistic Framework for Lifelong Test-Time Adaptation [[paper](https://arxiv.org/pdf/2212.09713.pdf)][[code](https://github.com/dhanajitb/petal)]
- **[2023 CVPR]** Continual Semantic Segmentation with Automatic Memory Sample Selection [[paper](https://arxiv.org/pdf/2304.05015.pdf)]
- **[2023 CVPR]** Exploring Data Geometry for Continual Learning [[paper](https://arxiv.org/pdf/2304.03931.pdf)]
- **[2023 CVPR]** PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning [[paper](https://arxiv.org/pdf/2304.04408.pdf)][[code](https://github.com/FelixHuiweiLin/PCR)]
- **[2023 CVPR]** Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/pdf/2304.00426.pdf)][[code](https://github.com/zysong0113/SAVC)]
- **[2023 CVPR]** Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation [[paper](https://arxiv.org/pdf/2302.14250.pdf)]
- **[2023 CVPR]** Continual Detection Transformer for Incremental Object Detection [[paper](https://arxiv.org/pdf/2304.03110.pdf)][[code](https://github.com/yaoyao-liu/CL-DETR)]
- **[2023 CVPR]** PIVOT: Prompting for Video Continual Learning [[paper](https://arxiv.org/pdf/2212.04842.pdf)]
- **[2023 CVPR]** CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning [[paper](https://arxiv.org/pdf/2211.13218.pdf)][[code](https://github.com/GT-RIPL/CODA-Prompt)]
- **[2023 CVPR]** Principles of Forgetting in Domain-Incremental Semantic Segmentation in Adverse Weather Conditions [[paper](https://arxiv.org/pdf/2303.14115.pdf)]
- **[2023 CVPR]** Class-Incremental Exemplar Compression for Class-Incremental Learning [[paper](https://arxiv.org/pdf/2303.14042.pdf)][[code](https://github.com/xfflzl/CIM-CIL)]
- **[2023 CVPR]** Dense Network Expansion for Class Incremental Learning [[paper](https://arxiv.org/pdf/2303.12696.pdf)]
- **[2023 ICLR]** Online Bias Correction for Task-Free Continual Learning [[paper](https://openreview.net/pdf?id=18XzeuYZh_)]
- **[2023 ICLR]** Sparse Distributed Memory is a Continual Learner [[paper](https://openreview.net/pdf?id=JknGeelZJpHP)][[code](https://github.com/trentbrick/sdmcontinuallearner)]
- **[2023 ICLR]** Continual Learning of Language Models [[paper](https://openreview.net/pdf?id=m_GDIItaI3o)]
- **[2023 ICLR]** Progressive Prompts: Continual Learning for Language Models without Forgetting [[paper](https://openreview.net/pdf?id=UJTgQBc91_)][[code](https://github.com/arazd/ProgressivePrompts)]
- **[2023 ICLR]** Is Forgetting Less a Good Inductive Bias for Forward Transfer? [[paper](https://openreview.net/pdf?id=dL35lx-mTEs)]
- **[2023 ICLR]** Online Boundary-Free Continual Learning by Scheduled Data Prior [[paper](https://openreview.net/pdf?id=qco4ekz2Epm)]
- **[2023 ICLR]** Incremental Learning of Structured Memory via Closed-Loop Transcription [[paper](https://openreview.net/pdf?id=XrgjF5-M3xi)][[code](https://github.com/tsb0601/i-ctrl)]
- **[2023 ICLR]** Better Generative Replay for Continual Federated Learning [[paper](https://openreview.net/pdf?id=cRxYWKiTan)]
- **[2023 ICLR]** BEEF: Bi-Compatible Class-Incremental Learning via Efficient Energy-Based Expansion and Fusion [[paper](https://openreview.net/pdf?id=iP77_axu0h3)]
- **[2023 ICLR]** Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning [[paper](https://openreview.net/pdf?id=zJXg_Wmob03)]
- **[2023 ICLR]** Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting [[paper](https://openreview.net/pdf?id=gfPUokHsW-)]
- **[2023 ICLR]** Building a Subspace of Policies for Scalable Continual Learning [[paper](https://openreview.net/pdf?id=UKr0MwZM6fL)][[code](https://github.com/facebookresearch/salina)]
- **[2023 ICLR]** A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [[paper](https://openreview.net/pdf?id=S07feAlQHgM)][[code1](https://github.com/zhoudw-zdw/cil_survey)/[code2](https://github.com/wangkiw/iclr23-memo)]
- **[2023 ICLR]** Continual evaluation for lifelong learning: Identifying the stability gap [[paper](https://openreview.net/pdf?id=Zy350cRstc6)][[code](https://github.com/mattdl/continualevaluation)]
- **[2023 ICLR]** Continual Unsupervised Disentangling of Self-Organizing Representations [[paper](https://openreview.net/pdf?id=ih0uFRFhaZZ)]
- **[2023 ICLR]** Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning [[paper](https://openreview.net/pdf?id=kPLzOfPfA2l)]
- **[2023 ICLR]** Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning [[paper](https://openreview.net/pdf?id=y5W8tpojhtJ)][[code](https://github.com/NeuralCollapseApplications/FSCIL)]
- **[2023 ICLR]** On the Soft-Subnetwork for Few-Shot Class Incremental Learning [[paper](https://openreview.net/pdf?id=z57WK5lGeHd)][[code](https://github.com/ihaeyong/softnet-fscil)]
- **[2023 ICLR]** Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning [[paper](https://openreview.net/pdf?id=zlbci7019Z3)][[code](https://github.com/neurai-lab/esmer)]
- **[2023 ICLR]** Task-Aware Information Routing from Common Representation Space in Lifelong Learning [[paper](https://arxiv.org/abs/2302.11346)][[code](https://github.com/neurai-lab/tamil)]

### 2022

- **[2022 WACV]** Online Continual Learning Via Candidates Voting [[paper](https://arxiv.org/abs/2110.08855v1)]
- **[2022 WACV]** Knowledge Capture and Replay for Continual Learning [[paper](https://arxiv.org/abs/2012.06789)]
- **[2022 WACV]** FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning [[paper](https://arxiv.org/abs/2211.13131)][[code](https://github.com/gregoirepetit/fetril)]
- **[2022 WACV]** Dataset Knowledge Transfer for Class-Incremental Learning without Memory [[paper](https://arxiv.org/abs/2110.08421)][[code](https://github.com/habibslim/dkt-for-cil)]
- **[2022 TPAMI]** Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [[paper](https://arxiv.org/abs/2203.14098)][[code](https://github.com/ygjwd12345/UCD)]
- **[2022 TPAMI]** MgSvF: Multi-Grained Slow vs. Fast Framework for Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/abs/2006.15524)]
- **[2022 TPAMI]** Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [[paper](https://arxiv.org/abs/2203.17030)][[code](https://github.com/zhoudw-zdw/TPAMI-Limit)]
- **[2022 TPAMI]** Class-Incremental Continual Learning into the eXtended DER-verse [[paper](https://arxiv.org/abs/2201.00766)][[code](https://github.com/aimagelab/mammoth)]
- **[2022 TNNLS]** Self-Training for Class-Incremental Semantic Segmentation [[paper](https://arxiv.org/abs/2012.03362)]
- **[2022 PRL]** Continual Semi-Supervised Learning through Contrastive Interpolation Consistency [[paper](https://arxiv.org/abs/2108.06552)][[code](https://github.com/loribonna/cssl)]
- **[2022 NeurIPS]** Task-Free Continual Learning via Online Discrepancy Distance Learning [[paper](https://arxiv.org/abs/2210.06579)]
- **[2022 NeurIPS]** SparCL: Sparse Continual Learning on the Edge [[paper](https://arxiv.org/abs/2209.09476)][[code](https://github.com/neu-spiral/SparCL)]
- **[2022 NeurIPS]** S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning [[paper](https://arxiv.org/abs/2207.12819)][[code1](https://github.com/g-u-n/pycil)/[code2](https://github.com/iamwangyabin/s-prompts)]
- **[2022 NeurIPS]** Retrospective Adversarial Replay for Continual Learning [[paper](https://openreview.net/forum?id=XEoih0EwCwL)]
- **[2022 NeurIPS]** Repeated Augmented Rehearsal: A Simple but Strong Baseline for Online Continual Learning [[paper](https://arxiv.org/abs/2209.13917)][[code](https://github.com/yaqianzhang/repeatedaugmentedrehearsal)]
- **[2022 NeurIPS]** On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning [[paper](https://arxiv.org/abs/2210.06443)][[code](https://github.com/aimagelab/lider)]
- **[2022 NeurIPS]** On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting [[paper](https://arxiv.org/abs/2206.00761)][[code](https://github.com/naver/gdc)]
- **[2022 NeurIPS]** Memory Efficient Continual Learning with Transformers [[paper](https://arxiv.org/abs/2203.04640)]
- **[2022 NeurIPS]** Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation [[paper](https://arxiv.org/abs/2210.04524)][[code](https://github.com/zoilsen/clom)]
- **[2022 NeurIPS]** Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting [[paper](https://arxiv.org/abs/1905.10696)]
- **[2022 NeurIPS]** How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning? [[paper](https://openreview.net/forum?id=c0l2YolqD2T)][[code](https://github.com/neuroailab/VisualLearningBenchmarks)]
- **[2022 NeurIPS]** Few-Shot Continual Active Learning by a Robot [[paper](https://arxiv.org/abs/2210.04137)]
- **[2022 NeurIPS]** Exploring Example Influence in Continual Learning [[paper](https://arxiv.org/abs/2209.12241)][[code](https://github.com/sssunqing/example_influence_cl)]
- **[2022 NeurIPS]** Disentangling Transfer in Continual Reinforcement Learning [[paper](https://arxiv.org/abs/2209.13900)]
- **[2022 NeurIPS]** Continual Learning In Environments With Polynomial Mixing Times [[paper](https://arxiv.org/abs/2112.07066)][[code](https://github.com/sharathraparthy/polynomial-mixing-times)]
- **[2022 NeurIPS]** Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions [[paper](https://arxiv.org/abs/2203.14383)]
- **[2022 NeurIPS]** CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks [[paper](https://arxiv.org/abs/2206.09059)][[code](https://github.com/glamor-usc/climb)]
- **[2022 NeurIPS]** CGLB: Benchmark Tasks for Continual Graph Learning [[paper](https://openreview.net/forum?id=5wNiiIDynDF)]
- **[2022 NeurIPS]** Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [[paper](https://arxiv.org/abs/2211.00789)]
- **[2022 NeurIPS]** ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation [[paper](https://arxiv.org/abs/2210.06816)]
- **[2022 NeurIPS]** A Theoretical Study on Solving Continual Learning [[paper](https://arxiv.org/abs/2211.02633)][[code](https://github.com/k-gyuhak/wptp)]
- **[2022 NeurIPSW]** A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning [[paper](https://arxiv.org/abs/2210.04428)][[code](https://github.com/pauljanson002/pretrained-cl)]
- **[2022 Neural Networks]** Efficient Perturbation Inference and Expandable Network for Continual Learning [[paper](https://www.sciencedirect.com/science/article/abs/pii/S0893608022004269)]
- **[2022 NAACL]** Overcoming Catastrophic Forgetting During Domain Adaptation of Seq2seq Language Generation [[paper](https://aclanthology.org/2022.naacl-main.398.pdf)]
- **[2022 MM]** Semantics-Driven Generative Replay for Few-Shot Class Incremental Learning [[paper](https://doi.org/10.1145/3503161.3548160)]
- **[2022 MM]** Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update and Hyper-class Representation [[paper](https://arxiv.org/abs/2207.12964)]
- **[2022 MM]** Class Gradient Projection For Continual Learning [[paper](https://doi.org/10.1145/3503161.3548054)]
- **[2022 IJCAI]** Learning from Students: Online Contrastive Distillation Network for General Continual Learning [[paper](https://doi.org/10.24963/ijcai.2022/446)]
- **[2022 IJCAI]** DyGRAIN: An Incremental Learning Framework for Dynamic Graphs [[paper](https://doi.org/10.24963/ijcai.2022/438)]
- **[2022 IJCAI]** Continual Semantic Segmentation Leveraging Image-level Labels and Rehearsal [[paper](https://doi.org/10.24963/ijcai.2022/177)]
- **[2022 IJCAI]** Continual Federated Learning Based on Knowledge Distillation [[paper](https://doi.org/10.24963/ijcai.2022/303)]
- **[2022 IJCAI]** CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation [[paper](https://doi.org/10.24963/ijcai.2022/329)][[code](https://github.com/microsoft/pycodegpt)]
- **[2022 ICPR]** Effects of Auxiliary Knowledge on Continual Learning [[paper](https://arxiv.org/abs/2206.02577v1)][[code](https://github.com/aimagelab/mammoth)]
- **[2022 ICML]** Wide Neural Networks Forget Less Catastrophically [[paper](https://arxiv.org/abs/2110.11526)]
- **[2022 ICML]** VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty [[paper](https://proceedings.mlr.press/v162/ardywibowo22a.html)]
- **[2022 ICML]** Proving Theorems using Incremental Learning and Hindsight Experience Replay [[paper](https://arxiv.org/abs/2112.10664)]
- **[2022 ICML]** Online Continual Learning through Mutual Information Maximization [[paper](https://proceedings.mlr.press/v162/guo22g.html)]
- **[2022 ICML]** NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks [[paper](https://arxiv.org/abs/2206.09117)][[code](https://github.com/burakgurbuz97/nispa)]
- **[2022 ICML]** Improving Task-free Continual Learning by Distributionally Robust Memory Evolution [[paper](https://arxiv.org/abs/2207.07256)][[code](https://github.com/joey-wang123/DRO-Task-free)]
- **[2022 ICML]** Forget-free Continual Learning with Winning Subnetworks [[paper](https://arxiv.org/abs/2303.14962)]
- **[2022 ICML]** Continual Learning with Guarantees via Weight Interval Constraints [[paper](https://arxiv.org/abs/2206.07996)][[code](https://github.com/gmum/intercontinet)]
- **[2022 ICML]** Continual Learning via Sequential Function-Space Variational Inference [[paper](https://proceedings.mlr.press/v162/rudner22a.html)]
- **[2022 ICLR]** TRGP: Trust Region Gradient Projection for Continual Learning [[paper](https://arxiv.org/abs/2202.02931)][[code](https://github.com/LYang-666/TRGP)]
- **[2022 ICLR]** Towards Continual Knowledge Learning of Language Models [[paper](https://arxiv.org/abs/2110.03215)][[code](https://github.com/joeljang/continual-knowledge-learning)]
- **[2022 ICLR]** Subspace Regularizers for Few-Shot Class Incremental Learning [[paper](https://arxiv.org/abs/2110.07059)][[code](https://github.com/feyzaakyurek/subspace-reg)]
- **[2022 ICLR]** Representational Continuity for Unsupervised Continual Learning [[paper](https://arxiv.org/abs/2110.06976)][[code1](https://github.com/aimagelab/mammoth)/[code2](https://github.com/divyam3897/ucl)]
- **[2022 ICLR]** Pretrained Language Model in Continual Learning: A Comparative Study [[paper](https://openreview.net/forum?id=figzpGMrdD)]
- **[2022 ICLR]** Online Coreset Selection for Rehearsal-based Continual Learning [[paper](https://arxiv.org/abs/2106.01085)]
- **[2022 ICLR]** Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference [[paper](https://arxiv.org/abs/2110.10031)][[code](https://github.com/naver-ai/i-blurry)]
- **[2022 ICLR]** New Insights on Reducing Abrupt Representation Change in Online Continual Learning [[paper](https://arxiv.org/abs/2104.05025)][[code1](https://github.com/aimagelab/mammoth)/[code2](https://github.com/pclucas14/aml)]
- **[2022 ICLR]** Model Zoo: A Growing Brain That Learns Continually [[paper](https://arxiv.org/abs/2106.03027)][[code](https://github.com/grasp-lyrl/modelzoo_continual)]
- **[2022 ICLR]** Memory Replay with Data Compression for Continual Learning [[paper](https://arxiv.org/abs/2202.06592)][[code](https://github.com/lywang3081/MRDC)]
- **[2022 ICLR]** Looking Back on Learned Experiences For Class/task Incremental Learning [[paper](https://openreview.net/forum?id=RxplU3vmBx)]
- **[2022 ICLR]** LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5 [[paper](https://arxiv.org/abs/2110.07298)][[code](https://github.com/qcwthu/lifelong-fewshot-language-learning)]
- **[2022 ICLR]** Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System [[paper](https://arxiv.org/abs/2201.12604)][[code](https://github.com/NeurAI-Lab/CLS-ER)]
- **[2022 ICLR]** Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting [[paper](https://arxiv.org/abs/2112.01653)]
- **[2022 ICLR]** Information-theoretic Online Memory Selection for Continual Learning [[paper](https://arxiv.org/abs/2204.04763)]
- **[2022 ICLR]** How Well Does Self-Supervised Pre-Training Perform with Streaming Data? [[paper](https://arxiv.org/abs/2104.12081)]
- **[2022 ICLR]** Effect of Scale on Catastrophic Forgetting in Neural Networks [[paper](https://openreview.net/forum?id=GhVS8_yPeEa)]
- **[2022 ICLR]** Continual Normalization: Rethinking Batch Normalization for Online Continual Learning [[paper](https://arxiv.org/abs/2203.16102)][[code](https://github.com/phquang/continual-normalization)]
- **[2022 ICLR]** Continual Learning with Recursive Gradient Optimization [[paper](https://arxiv.org/pdf/2201.12522.pdf)]
- **[2022 ICLR]** Continual Learning with Filter Atom Swapping [[paper](https://openreview.net/forum?id=metRpM4Zrcb)][[code](https://github.com/ZichenMiao/CL_Atom_Swapping)]
- **[2022 ICLR]** CoMPS: Continual Meta Policy Search [[paper](https://arxiv.org/abs/2112.04467)]
- **[2022 ICLR]** CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability [[paper](https://arxiv.org/abs/2110.03331)][[code](https://github.com/ml-research/CLEVA-Compass)]
- **[2022 EMNLP]** Continual Training of Language Models for Few-Shot Learning [[paper](https://arxiv.org/abs/2210.05549)][[code](https://github.com/uic-liu-lab/cpt)]
- **[2022 ECCV]** Transfer without Forgetting [[paper](https://arxiv.org/abs/2206.00388)][[code](https://github.com/mbosc/twf)]
- **[2022 ECCV]** The Challenges of Continuous Self-Supervised Learning [[paper](https://arxiv.org/abs/2203.12710)]
- **[2022 ECCV]** S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/abs/2307.02246)][[code](https://github.com/jayatejak/s3c)]
- **[2022 ECCV]** R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning [[paper](https://arxiv.org/abs/2203.13104)][[code](https://github.com/jianzhangcs/r-dfcil)]
- **[2022 ECCV]** Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation [[paper](https://arxiv.org/abs/2207.10856)][[code](https://github.com/hongbin98/proca)]
- **[2022 ECCV]** Online Task-free Continual Learning with Dynamic Sparse Distributed Memory [[paper](https://doi.org/10.1007/978-3-031-19806-9_42)]
- **[2022 ECCV]** Online Continual Learning with Contrastive Vision Transformer [[paper](https://arxiv.org/abs/2207.13516)]
- **[2022 ECCV]** Novel Class Discovery without Forgetting [[paper](https://arxiv.org/abs/2207.10659)]
- **[2022 ECCV]** Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions [[paper](https://arxiv.org/abs/2209.01501)]
- **[2022 ECCV]** Long-Tailed Class Incremental Learning [[paper](https://arxiv.org/abs/2210.00266)][[code](https://github.com/xialeiliu/long-tailed-cil)]
- **[2022 ECCV]** Learning with Recoverable Forgetting [[paper](https://arxiv.org/abs/2207.08224)]
- **[2022 ECCV]** Incremental Task Learning with Incremental Rank Updates [[paper](https://arxiv.org/abs/2207.09074)]
- **[2022 ECCV]** incDFM: Incremental Deep Feature Modeling for Continual Novelty Detection [[paper](https://doi.org/10.1007/978-3-031-19806-9_34)]
- **[2022 ECCV]** Helpful or Harmful: Inter-Task Association in Continual Learning [[paper](https://doi.org/10.1007/978-3-031-20083-0_31)]
- **[2022 ECCV]** Generative Negative Text Replay for Continual Vision-Language Pretraining [[paper](https://arxiv.org/abs/2210.17322)]
- **[2022 ECCV]** FOSTER: Feature Boosting and Compression for Class-Incremental Learning [[paper](https://arxiv.org/abs/2204.04662)][[code](https://github.com/G-U-N/ECCV22-FOSTER)]
- **[2022 ECCV]** Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay [[paper](https://arxiv.org/abs/2207.11213)][[code](https://github.com/liuh127/FSCIL-via-Entropy-regularized-DF-Replay)]
- **[2022 ECCV]** Few-Shot Class-Incremental Learning from an Open-Set Perspective [[paper](https://arxiv.org/abs/2208.00147)][[code](https://github.com/canpeng123/fscil_alice)]
- **[2022 ECCV]** DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning [[paper](https://arxiv.org/abs/2204.04799)][[code](https://github.com/google-research/l2p)]
- **[2022 ECCV]** DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning [[paper](https://arxiv.org/abs/2208.08112)]
- **[2022 ECCV]** CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One [[paper](https://arxiv.org/abs/2207.06543)][[code](https://github.com/lywang3081/coscl)]
- **[2022 ECCV]** Class-incremental Novel Class Discovery [[paper](https://arxiv.org/abs/2207.08605)][[code](https://github.com/oatmealliu/class-incd)]
- **[2022 ECCV]** Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer [[paper](https://arxiv.org/abs/2208.03767)][[code](https://github.com/ashok-arjun/CSCCT)]
- **[2022 ECCV]** Balancing Stability and Plasticity through Advanced Null Space in Continual Learning [[paper](https://arxiv.org/abs/2207.12061)]
- **[2022 ECCV]** Balancing between Forgetting and Acquisition in Incremental Subpopulation Learning [[paper](https://doi.org/10.1007/978-3-031-19809-0_21)]
- **[2022 ECCV]** Anti-Retroactive Interference for Lifelong Learning [[paper](https://arxiv.org/abs/2208.12967)][[code](https://github.com/bhrqw/ari)]
- **[2022 CVPR]** vCLIMB: A Novel Video Class Incremental Learning Benchmark [[paper](https://arxiv.org/abs/2201.09381)]
- **[2022 CVPR]** Towards Better Plasticity-Stability Trade-off in Incremental Learning: A Simple Linear Connector [[paper](https://arxiv.org/abs/2110.07905)]
- **[2022 CVPR]** Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning [[paper](https://arxiv.org/abs/2203.06359)]
- **[2022 CVPR]** Self-Supervised Models are Continual Learners [[paper](https://arxiv.org/abs/2112.04215)][[code](https://github.com/donkeyshot21/cassle)]
- **[2022 CVPR]** Representation Compensation Networks for Continual Semantic Segmentation [[paper](https://arxiv.org/abs/2203.05402)][[code](https://github.com/zhangchbin/rcil)]
- **[2022 CVPR]** Probing Representation Forgetting in Supervised and Unsupervised Continual Learning [[paper](https://arxiv.org/abs/2203.13381)]
- **[2022 CVPR]** Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation [[paper](https://arxiv.org/abs/2204.02136)][[code](https://github.com/hi-ft/erd)]
- **[2022 CVPR]** Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries [[paper](https://arxiv.org/abs/2203.15355)][[code](https://github.com/clovaai/puridiver)]
- **[2022 CVPR]** On Generalizing Beyond Domains in Cross-Domain Continual Learning [[paper](https://arxiv.org/abs/2203.03970)]
- **[2022 CVPR]** Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency [[paper](https://ieeexplore.ieee.org/abstract/document/9879220)][[code](https://github.com/yanangu/dvc)]
- **[2022 CVPR]** Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning [[paper](https://arxiv.org/abs/2112.04731)][[code](https://github.com/yujun-shi/cwd)]
- **[2022 CVPR]** MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9878925)]
- **[2022 CVPR]** Meta-attention for ViT-backed Continual Learning [[paper](https://arxiv.org/abs/2203.11684)][[code](https://github.com/zju-vipa/meat-til)]
- **[2022 CVPR]** Lifelong Graph Learning [[paper](https://arxiv.org/abs/2202.10688)][[code1](https://github.com/sair-lab/LGL)/[code2](https://github.com/wang-chen/lgl-action-recognition)]
- **[2022 CVPR]** Learning to Prompt for Continual Learning [[paper](https://arxiv.org/abs/2112.08654)][[code](https://github.com/google-research/l2p)]
- **[2022 CVPR]** Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data [[paper](https://arxiv.org/abs/2204.08932)][[code](https://github.com/TOM-tym/Learn-to-Imagine)]
- **[2022 CVPR]** Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [[paper](https://arxiv.org/abs/2202.10203)]
- **[2022 CVPR]** Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding [[paper](https://arxiv.org/abs/2203.00867)][[code](https://github.com/dqiaole/zits_inpainting)]
- **[2022 CVPR]** Incremental Learning in Semantic Segmentation from Image Labels [[paper](https://arxiv.org/pdf/2112.01882.pdf)][[code](https://github.com/fcdl94/wilson)]
- **[2022 CVPR]** General Incremental Learning with Domain-aware Categorical Representations [[paper](https://arxiv.org/abs/2204.04078)]
- **[2022 CVPR]** GCR: Gradient Coreset Based Replay Buffer Selection For Continual Learning [[paper](https://arxiv.org/abs/2111.11210)]
- **[2022 CVPR]** Forward Compatible Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/abs/2203.06953)][[code](https://github.com/zhoudw-zdw/cvpr22-fact)]
- **[2022 CVPR]** Few-Shot Incremental Learning for Label-to-Image Translation [[paper](https://ieeexplore.ieee.org/document/9878463)]
- **[2022 CVPR]** Federated Class-Incremental Learning [[paper](https://arxiv.org/abs/2203.11473)][[code](https://github.com/conditionwang/fcil)]
- **[2022 CVPR]** Energy-based Latent Aligner for Incremental Learning [[paper](https://arxiv.org/abs/2203.14952)][[code](https://github.com/josephkj/eli)]
- **[2022 CVPR]** DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion [[paper](https://arxiv.org/abs/2111.11326)][[code](https://github.com/arthurdouillard/dytox)]
- **[2022 CVPR]** Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches [[paper](https://arxiv.org/abs/2203.14843)]
- **[2022 CVPR]** Continual Learning with Lifelong Vision Transformer [[paper](https://ieeexplore.ieee.org/document/9880356)]
- **[2022 CVPR]** Continual Learning for Visual Search with Backward Consistent Feature Embedding [[paper](https://arxiv.org/abs/2205.13384)][[code](https://github.com/ivclab/cvs)]
- **[2022 CVPR]** Constrained Few-shot Class-incremental Learning [[paper](https://arxiv.org/abs/2203.16588)][[code](https://github.com/ibm/constrained-fscil)]
- **[2022 CVPR]** Class-Incremental Learning with Strong Pre-trained Models [[paper](https://arxiv.org/abs/2204.03634)][[code](https://github.com/amazon-science/sp-cil)]
- **[2022 CVPR]** Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation [[paper](https://arxiv.org/abs/2204.00895)][[code](https://github.com/kminsoo/afc)]
- **[2022 CVPR]** Bring Evanescent Representations to Life in Lifelong Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9878745)]
- **[2022 CVIU]** Balanced softmax cross-entropy for incremental learning with and without memory [[paper](https://arxiv.org/abs/2103.12532)]
- **[2022 COLING]** Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection [[paper](https://arxiv.org/abs/2204.07275)][[code](https://github.com/vt-nlp/incremental_prompting)]
- **[2022 COLING]** Dynamic Dialogue Policy for Continual Reinforcement Learning [[paper](https://arxiv.org/abs/2204.05928)]
- **[2022 COLING]** Continual Few-shot Intent Detection [[paper](https://aclanthology.org/2022.coling-1.26/)]
- **[2022 ACL]** Overcoming Catastrophic Forgetting beyond Continual Learning: Balanced Training for Neural Machine Translation [[paper](https://arxiv.org/abs/2203.03910)][[code](https://github.com/ictnlp/cokd)]
- **[2022 ACL]** Few-Shot Class-Incremental Learning for Named Entity Recognition [[paper](https://aclanthology.org/2022.acl-long.43/)]
- **[2022 ACL]** Continual Sequence Generation with Adaptive Compositional Modules [[paper](https://arxiv.org/abs/2203.10652)][[code](https://github.com/SALT-NLP/Adaptive-Compositional-Modules)]
- **[2022 ACL]** Continual Prompt Tuning for Dialog State Tracking [[paper](https://arxiv.org/abs/2203.06654)][[code](https://github.com/thu-coai/cpt4dst)]
- **[2022 ACL]** Continual Pre-training of Language Models for Math Problem Understanding with Syntax-Aware Memory Network [[paper](https://aclanthology.org/2022.acl-long.408/)][[code](https://github.com/rucaibox/comus)]
- **[2022 ACL]** Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation [[paper](https://arxiv.org/abs/2203.02135)][[code](https://github.com/qcwthu/continual_fewshot_relation_learning)]
- **[2022 ACL]** ConTinTin: Continual Learning from Task Instructions [[paper](https://arxiv.org/abs/2203.08512)]
- **[2022 AAAI]** Static-Dynamic Co-teaching for Class-Incremental 3D Object Detection [[paper](https://arxiv.org/abs/2112.07241)]
- **[2022 AAAI]** Same State, Different Task: Continual Reinforcement Learning without Interference [[paper](https://arxiv.org/abs/2106.02940)][[code](https://github.com/skezle/owl)]
- **[2022 AAAI]** Learngene: From Open-World to Your Learning Task [[paper](https://arxiv.org/abs/2106.06788)][[code](https://github.com/BruceQFWang/learngene)]
- **[2022 AAAI]** Continual Learning through Retrieval and Imagination [[paper](https://doi.org/10.1609/aaai.v36i8.20837)]
- **[2022 AAAI]** Adaptive Orthogonal Projection for Batch and Online Continual Learning [[paper](https://doi.org/10.1609/aaai.v36i6.20634)]

### 2021

- **[2021 arXiv]** SPeCiaL: Self-Supervised Pretraining for Continual Learning [[paper](https://arxiv.org/abs/2106.09065)]
- **[2021 arXiv]** An Empirical Investigation of the Role of Pre-training in Lifelong Learning [[paper](https://arxiv.org/abs/2112.09153)][[code](https://github.com/sanketvmehta/lifelong-learning-pretraining-and-sam)]
- **[2021 WACV]** Do not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting [[paper](https://arxiv.org/abs/2102.01906)]
- **[2021 TPAMI]** Incremental Object Detection via Meta-Learning [[paper](https://arxiv.org/abs/2003.08798)][[code](https://github.com/JosephKJ/iOD)]
- **[2021 TNNLS]** Triple-Memory Networks: A Brain-Inspired Method for Continual Learning [[paper](https://arxiv.org/abs/2003.03143)]
- **[2021 PRL]** ACAE-REMIND for Online Continual Learning with Compressed Feature Replay [[paper](https://arxiv.org/abs/2105.08595)]
- **[2021 NeurIPS]** SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning [[paper](https://arxiv.org/abs/2106.11562)][[code](https://github.com/clovaai/SSUL)]
- **[2021 NeurIPS]** RMM: Reinforced Memory Management for Class-Incremental Learning [[paper](https://arxiv.org/abs/2301.05792)][[code](https://gitlab.mpi-klsb.mpg.de/yaoyaoliu/rmm/)]
- **[2021 NeurIPS]** Posterior Meta-Replay for Continual Learning [[paper](https://arxiv.org/abs/2103.01133)][[code](https://github.com/chrhenning/posterior_replay_cl)]
- **[2021 NeurIPS]** Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima [[paper](https://arxiv.org/abs/2111.01549)][[code](https://github.com/moukamisama/f2m)]
- **[2021 NeurIPS]** Optimizing Reusable Knowledge for Continual Learning via Metalearning [[paper](https://arxiv.org/abs/2106.05390)][[code](https://github.com/JuliousHurtado/meta-training-setup)]
- **[2021 NeurIPS]** Natural continual learning: success is a journey, not (just) a destination [[paper](https://arxiv.org/abs/2106.08085)][[code](https://github.com/tachukao/ncl)]
- **[2021 NeurIPS]** Mitigating Forgetting in Online Continual Learning with Neuron Calibration [[paper](https://arxiv.org/abs/2211.05347)]
- **[2021 NeurIPS]** Lifelong Domain Adaptation via Consolidated Internal Distribution [[paper](https://openreview.net/forum?id=lpW-UP8VKcg)]
- **[2021 NeurIPS]** Learning where to learn: Gradient sparsity in meta and continual learning [[paper](https://arxiv.org/abs/2110.14402)][[code](https://github.com/johswald/learning_where_to_learn)]
- **[2021 NeurIPS]** Gradient-based Editing of Memory Examples for Online Task-free Continual Learning [[paper](https://arxiv.org/abs/2006.15294)]
- **[2021 NeurIPS]** Generative vs Discriminative: Rethinking The Meta-Continual Learning [[paper](https://openreview.net/forum?id=soDi-HkzC1)][[code](https://github.com/aminbana/gemcl)]
- **[2021 NeurIPS]** Formalizing the Generalization-Forgetting Trade-Off in Continual Learning [[paper](https://arxiv.org/abs/2109.14035)]
- **[2021 NeurIPS]** Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning [[paper](https://arxiv.org/abs/2110.04593)][[code](https://github.com/danruod/fs-dgpm)]
- **[2021 NeurIPS]** DualNet: Continual Learning, Fast and Slow [[paper](https://arxiv.org/abs/2110.00175)][[code](https://github.com/phquang/DualNet)]
- **[2021 NeurIPS]** Continual World: A Robotic Benchmark For Continual Reinforcement Learning [[paper](https://arxiv.org/abs/2105.10919)]
- **[2021 NeurIPS]** Continual Learning via Local Module Composition [[paper](https://arxiv.org/abs/2111.07736)]
- **[2021 NeurIPS]** Continual Auxiliary Task Learning [[paper](https://arxiv.org/abs/2202.11133)]
- **[2021 NeurIPS]** Class-Incremental Learning via Dual Augmentation [[paper](https://openreview.net/forum?id=8dqEeFuhgMG)][[code](https://github.com/impression2805/il2a)]
- **[2021 NeurIPS]** Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [[paper](https://arxiv.org/abs/2110.15017)][[code](https://github.com/dongnana777/bridging-non-co-occurrence)]
- **[2021 NeurIPS]** BooVAE: Boosting Approach for Continual Learning of VAE [[paper](https://arxiv.org/abs/1908.11853)][[code](https://github.com/AKuzina/BooVAE)]
- **[2021 NeurIPS]** BNS: Building Network Structures Dynamically for Continual Learning [[paper](https://openreview.net/forum?id=2ybxtABV2Og)]
- **[2021 NeurIPS]** AFEC: Active Forgetting of Negative Transfer in Continual Learning [[paper](https://arxiv.org/abs/2110.12187)]
- **[2021 NeurIPS]** Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning [[paper](https://arxiv.org/abs/2112.02706)][[code](https://github.com/zixuanke/pycontinual)]
- **[2021 NAACL]** Towards Continual Learning for Multilingual Machine Translation via Vocabulary Substitution [[paper](https://arxiv.org/abs/2103.06799)]
- **[2021 NAACL]** Continual Learning for Text Classification with Information Disentanglement Based Regularization [[paper](https://arxiv.org/abs/2104.05489v1)][[code](https://github.com/SALT-NLP/IDBR)]
- **[2021 NAACL]** Continual Learning for Neural Machine Translation [[paper](https://aclanthology.org/2021.naacl-main.310/)]
- **[2021 NAACL]** Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks [[paper](https://arxiv.org/abs/2112.03271)][[code](https://github.com/zixuanke/pycontinual)]
- **[2021 MM]** Video Transformer for Deepfake Detection with Incremental Learning [[paper](https://arxiv.org/abs/2108.05307)]
- **[2021 MM]** Remember and Reuse: Cross-Task Blind Image Quality Assessment via Relevance-aware Incremental Learning [[paper](https://doi.org/10.1145/3474085.3475642)]
- **[2021 MM]** Co-Transport for Class-Incremental Learning [[paper](https://arxiv.org/abs/2107.12654)][[code](https://github.com/zhoudw-zdw/MM21-Coil)]
- **[2021 MM]** An EM Framework for Online Incremental Learning of Semantic Segmentation [[paper](https://arxiv.org/abs/2108.03613)]
- **[2021 IJCAI]** TrafficStream: A Streaming Traffic Flow Forecasting Framework Based on Graph Neural Networks and Continual Learning [[paper](https://arxiv.org/abs/2106.06273)][[code](https://github.com/AprLie/TrafficStream)]
- **[2021 IJCAI]** Learning with Selective Forgetting [[paper](https://doi.org/10.24963/ijcai.2021/137)]
- **[2021 IJCAI]** Knowledge Consolidation based Class Incremental Online Learning with Limited Data [[paper](https://arxiv.org/abs/2106.06795)]
- **[2021 IJCAI]** FedSpeech: Federated Text-to-Speech with Continual Learning [[paper](https://arxiv.org/abs/2110.07216)]
- **[2021 ICPR]** Semi-Supervised Class Incremental Learning [[paper](https://ieeexplore.ieee.org/abstract/document/9413225)]
- **[2021 ICPR]** Class-incremental Learning with Pre-allocated Fixed Classifiers [[paper](https://arxiv.org/abs/2010.08657)][[code](https://github.com/DigiTurk84/class-incremental-polytope)]
- **[2021 ICML]** Variational Auto-Regressive Gaussian Processes for Continual Learning [[paper](https://arxiv.org/abs/2006.05468)][[code](https://github.com/uber-research/vargp)]
- **[2021 ICML]** Kernel Continual Learning [[paper](https://arxiv.org/abs/2107.05757)][[code](https://github.com/mmderakhshani/KCL)]
- **[2021 ICML]** GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning [[paper](https://arxiv.org/abs/2102.07868)][[code](https://github.com/IdanAchituve/GP-Tree)]
- **[2021 ICML]** Federated Continual Learning with Weighted Inter-client Transfer [[paper](https://arxiv.org/abs/2003.03196)][[code](https://github.com/wyjeong/FedWeIT)]
- **[2021 ICML]** Continuous Coordination As a Realistic Scenario for Lifelong Learning [[paper](https://arxiv.org/abs/2103.03216)][[code](https://github.com/chandar-lab/Lifelong-Hanabi)]
- **[2021 ICML]** Continual Learning in the Teacher-Student Setup: Impact of Task Similarity [[paper](https://arxiv.org/pdf/2107.04384.pdf)][[code](https://github.com/seblee97/student_teacher_catastrophic)]
- **[2021 ICML]** Bayesian Structural Adaptation for Continual Learning [[paper](https://arxiv.org/abs/1912.03624)][[code](https://github.com/scakc/NPBCL)]
- **[2021 ICLR]** Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [[paper](https://openreview.net/pdf?id=tHgJoMfy6nI)][[code](https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons)]
- **[2021 ICLR]** Linear Mode Connectivity in Multitask and Continual Learning [[paper](https://arxiv.org/abs/2010.04495)][[code](https://github.com/imirzadeh/MC-SGD)]
- **[2021 ICLR]** Gradient Projection Memory for Continual Learning [[paper](https://arxiv.org/abs/2103.09762)][[code](https://github.com/sahagobinda/GPM)]
- **[2021 ICLR]** Generalized Variational Continual Learning [[paper](https://openreview.net/forum?id=_IM-AfFhna9)]
- **[2021 ICLR]** Efficient Continual Learning with Modular Networks and Task-Driven Priors [[paper](https://openreview.net/forum?id=EKV158tSfwv)][[code1](https://github.com/facebookresearch/CTrLBenchmark)/[code2](https://github.com/TomVeniat/MNTDP)]
- **[2021 ICLR]** EEC: Learning to Encode and Regenerate Images for Continual Learning [[paper](https://arxiv.org/abs/2101.04904)][[code](https://github.com/aliayub7/EEC)]
- **[2021 ICLR]** CPR: Classifier-Projection Regularization for Continual Learning [[paper](https://openreview.net/forum?id=F2v4aqEL6ze)][[code](https://github.com/csm9493/CPR_CL)]
- **[2021 ICLR]** Continual Learning in Recurrent Neural Networks [[paper](https://openreview.net/forum?id=8xeBUgD8u9)][[code](https://github.com/mariacer/cl_in_rnns)]
- **[2021 ICLR]** Contextual Transformation Networks for Online Continual Learning [[paper](https://openreview.net/forum?id=zx_uX-BO7CH)]
- **[2021 ICCV]** Wanderlust: Online Continual Object Detection in the Real World [[paper](https://arxiv.org/abs/2108.11005)][[code](https://github.com/oakdata/benchmark)]
- **[2021 ICCV]** Synthesized Feature based Few-Shot Class-Incremental Learning on a Mixture of Subspaces [[paper](https://ieeexplore.ieee.org/document/9711372)]
**[2021 ICCV]** Striking a Balance between Stability and Plasticity for Class-Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9711484)] - **[2021 ICCV]** SS-IL: Separated Softmax for Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9710553)] - **[2021 ICCV]** Rehearsal revealed: The limits and merits of revisiting samples in continual learning [[paper](https://arxiv.org/abs/2104.07446)][[code](https://github.com/Mattdl/RehearsalRevealed)] - **[2021 ICCV]** RECALL: Replay-based Continual Learning in Semantic Segmentation [[paper](https://arxiv.org/abs/2108.03673)][[code](https://github.com/lttm/recall)] - **[2021 ICCV]** Online Continual Learning with Natural Distribution Shifts An Empirical Study with Visual Data [[paper](https://arxiv.org/abs/2108.09020)][[code](https://github.com/intellabs/continuallearning)] - **[2021 ICCV]** Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting [[paper](https://arxiv.org/abs/2108.08165)][[code](https://github.com/annusha/lcwof)] - **[2021 ICCV]** Few-Shot and Continual Learning with Attentive Independent Mechanisms [[paper](https://arxiv.org/abs/2107.14053)][[code](https://github.com/huang50213/AIM-Fewshot-Continual)] - **[2021 ICCV]** Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data [[paper](https://ieeexplore.ieee.org/document/9711342)] - **[2021 ICCV]** Detection and Continual Learning of Novel Face Presentation Attacks [[paper](https://arxiv.org/abs/2108.12081)] - **[2021 ICCV]** Continual Prototype Evolution Learning Online from Non-Stationary Data Streams [[paper](https://arxiv.org/abs/2009.00919)][[code](https://github.com/Mattdl/ContinualPrototypeEvolution)] - **[2021 ICCV]** Continual Learning on Noisy Data Streams via Self-Purified Replay [[paper](https://arxiv.org/abs/2110.07735)] - **[2021 ICCV]** Continual Learning for Image-Based Camera Localization 
[[paper](https://arxiv.org/abs/2108.09112)][[code](https://github.com/aaltovision/cl_hscnet)] - **[2021 ICCV]** Co2L: Contrastive Continual Learning [[paper](https://arxiv.org/abs/2106.14413)][[code](https://github.com/chaht01/co2l)] - **[2021 ICCV]** Class-Incremental Learning for Action Recognition in Videos [[paper](https://arxiv.org/abs/2203.13611)] - **[2021 ICCV]** Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [[paper](https://arxiv.org/abs/2106.09701)][[code](https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL)] - **[2021 EMNLP]** Total Recall: a Customized Continual Learning Method for Neural Semantic Parsers [[paper](https://arxiv.org/abs/2109.05186)][[code](https://github.com/zhuang-li/cl_nsp)] - **[2021 EMNLP]** ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning [[paper](https://arxiv.org/abs/2012.15283)][[code](https://github.com/pluslabnlp/econet)] - **[2021 EMNLP]** Continual Learning in Task-Oriented Dialogue Systems [[paper](https://arxiv.org/abs/2012.15504)][[code](https://github.com/andreamad8/ToDCL)] - **[2021 EMNLP]** Continual Few-Shot Learning for Text Classification [[paper](https://aclanthology.org/2021.emnlp-main.460/)][[code](https://github.com/ramakanth-pasunuru/cfl-benchmark)] - **[2021 EMNLP]** CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks [[paper](https://arxiv.org/abs/2112.02714)][[code](https://github.com/zixuanke/pycontinual)] - **[2021 CVPR]** Training Networks in Null Space of Feature Covariance for Continual Learning [[paper](https://arxiv.org/abs/2103.07113)][[code](https://github.com/ShipengWang/Adam-NSCL)] - **[2021 CVPR]** Towards Open World Object Detection [[paper](https://arxiv.org/abs/2103.02603)][[code](https://github.com/JosephKJ/OWOD)] - **[2021 CVPR]** Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/abs/2103.04059)] - **[2021 CVPR]** Self-Promoted Prototype 
Refinement for Few-Shot Class-Incremental Learning [[paper](https://arxiv.org/abs/2107.08918)][[code](https://github.com/zhukaii/SPPR)] - **[2021 CVPR]** Rectification-based Knowledge Retention for Continual Learning [[paper](None)] - **[2021 CVPR]** Rainbow Memory Continual Learning with a Memory of Diverse Samples [[paper](https://arxiv.org/abs/2103.17230)][[code](https://github.com/clovaai/rainbow-memory)] - **[2021 CVPR]** Prototype Augmentation and Self-Supervision for Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9578909)][[code](https://github.com/g-u-n/pycil)] - **[2021 CVPR]** PLOP: Learning without Forgetting for Continual Semantic Segmentation [[paper](https://arxiv.org/abs/2011.11390)][[code](https://github.com/arthurdouillard/CVPR2021_PLOP)] - **[2021 CVPR]** ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [[paper](https://arxiv.org/abs/2101.00407)] - **[2021 CVPR]** On Learning the Geodesic Path for Incremental Learning [[paper](https://arxiv.org/abs/2104.08572)][[code](https://github.com/chrysts/geodesic_continual_learning)] - **[2021 CVPR]** Lifelong Person Re-Identification via Adaptive Knowledge Accumulation [[paper](https://arxiv.org/abs/2103.12462)][[code](https://github.com/TPCD/LifelongReID)] - **[2021 CVPR]** Layerwise Optimization by Gradient Decomposition for Continual Learning [[paper](https://arxiv.org/abs/2105.07561)] - **[2021 CVPR]** Incremental Learning via Rate Reduction [[paper](https://arxiv.org/abs/2011.14593)] - **[2021 CVPR]** Incremental Few-Shot Instance Segmentation [[paper](https://arxiv.org/abs/2105.05312)][[code](https://github.com/danganea/iMTFA)] - **[2021 CVPR]** Image De-raining via Continual Learning [[paper](https://ieeexplore.ieee.org/document/9577362/)] - **[2021 CVPR]** IIRC: Incremental Implicitly-Refined Classification [[paper](https://arxiv.org/abs/2012.12477)][[code](https://github.com/chandar-lab/IIRC)] - **[2021 CVPR]** 
Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation [[paper](https://ieeexplore.ieee.org/document/9578318)] - **[2021 CVPR]** Few-Shot Incremental Learning with Continually Evolved Classifiers [[paper](https://arxiv.org/abs/2104.03047)][[code](https://github.com/icoz69/cec-cvpr2021)] - **[2021 CVPR]** Efficient Feature Transformations for Discriminative and Generative Continual Learning [[paper](https://arxiv.org/abs/2103.13558)][[code](https://github.com/vkverma01/EFT)] - **[2021 CVPR]** Distilling Causal Effect of Data in Class-Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9578597)][[code](https://github.com/JoyHuYY1412/DDE_CIL)] - **[2021 CVPR]** DER: Dynamically Expandable Representation for Class Incremental Learning [[paper](https://arxiv.org/abs/2103.16788)][[code](https://github.com/Rhyssiyan/DER-ClassIL.pytorch)] - **[2021 CVPR]** Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations [[paper](https://arxiv.org/abs/2103.06342)] - **[2021 CVPR]** Continual Learning via Bit-Level Information Preserving [[paper](https://arxiv.org/abs/2105.04444v1)] - **[2021 CVPR]** Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [[paper](https://ieeexplore.ieee.org/document/9578295/)] - **[2021 CVPR]** Adaptive Aggregation Networks for Class-Incremental Learning [[paper](https://arxiv.org/abs/2010.05063)][[code](https://github.com/yaoyao-liu/class-incremental-learning)] - **[2021 CVIU]** SID: Incremental Learning for Anchor-Free Object Detection via Selective and Inter-Related Distillation [[paper](https://arxiv.org/abs/2012.15439)] - **[2021 CVIU]** Knowledge Distillation for Incremental Learning in Semantic Segmentation [[paper](https://arxiv.org/abs/1911.03462)] - **[2021 BMVC]** Self-Supervised Training Enhances Online Continual Learning [[paper](https://arxiv.org/abs/2103.14010)] - **[2021 AISTATS]** Continual Learning using a 
Bayesian Nonparametric Dictionary of Weight Factors [[paper](https://arxiv.org/abs/2004.10098)] - **[2021 AISTATS]** A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix [[paper](https://arxiv.org/abs/2010.04003)][[code](https://github.com/tldoan/PCA-OGD)] - **[2021 AAAI]** Using Hindsight to Anchor Past Knowledge in Continual Learning [[paper](https://arxiv.org/abs/2002.08165)] - **[2021 AAAI]** Unsupervised Model Adaptation for Continual Semantic Segmentation [[paper](https://arxiv.org/pdf/2009.12518v1.pdf)] - **[2021 AAAI]** Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network [[paper](https://arxiv.org/abs/2107.01349)][[code](https://github.com/bigdata-inha/Split-and-Bridge)] - **[2021 AAAI]** Online Class-Incremental Continual Learning with Adversarial Shapley Value [[paper](https://arxiv.org/abs/2009.00093)][[code](https://github.com/RaptorMai/online-continual-learning)] - **[2021 AAAI]** Lifelong and Continual Learning Dialogue Systems Learning during Conversation [[paper](https://arxiv.org/abs/2211.06553)] - **[2021 AAAI]** Gradient Regularized Contrastive Learning for Continual Domain Adaptation [[paper](https://arxiv.org/abs/2103.12294v1)] - **[2021 AAAI]** Few-Shot Lifelong Learning [[paper](https://arxiv.org/abs/2103.00991)] - **[2021 AAAI]** Few-Shot Class-Incremental Learning via Relation Knowledge Distillation [[paper](https://doi.org/10.1609/aaai.v35i2.16213)] - **[2021 AAAI]** Curriculum-Meta Learning for Order-Robust Continual Relation Extraction [[paper](https://arxiv.org/abs/2101.01926)][[code](https://github.com/wutong8023/AAAI-CML)] - **[2021 AAAI]** Continual Learning for Named Entity Recognition [[paper](https://aclanthology.org/2022.findings-acl.179.pdf)] - **[2021 AAAI]** Continual Learning by Using Information of Each Class Holistically [[paper](https://doi.org/10.1609/aaai.v35i9.16952)] - **[2021 AAAI]** Class-Incremental Instance Segmentation via Multi-Teacher Networks 
[[paper](https://doi.org/10.1609/aaai.v35i2.16238)] - **[2021 AAAI]** A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation [[paper](https://doi.org/10.1609/aaai.v35i7.16752)] ### 2020 - **[2020 WACV]** ScaIL: Classifier Weights Scaling for Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9093562/)][[code](https://github.com/EdenBelouadah/class-incremental-learning)] - **[2020 WACV]** Class-incremental Learning via Deep Model Consolidation [[paper](https://arxiv.org/abs/1903.07864)] - **[2020 TPAMI]** RPSNet An Adaptive Random Path Selection Approach for Incremental Learning [[paper](https://arxiv.org/abs/1906.01120)] - **[2020 TPAMI]** Continual Learning Using Bayesian Neural Networks [[paper](https://arxiv.org/abs/1910.04112)] - **[2020 PRL]** Faster ILOD Incremental Learning for Object Detectors based on Faster RCNN [[paper](https://arxiv.org/abs/2003.03901)] - **[2020 NeurIPS]** Understanding the Role of Training Regimes in Continual Learning [[paper](https://arxiv.org/abs/2006.06958)][[code](https://github.com/imirzadeh/stable-continual-learning)] - **[2020 NeurIPS]** RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning [[paper](https://openreview.net/forum?id=DlhyudbShm)][[code](https://github.com/delchiaro/RATT)] - **[2020 NeurIPS]** Organizing recurrent network dynamics by task-computation to enable continual learning [[paper](https://dl.acm.org/doi/10.5555/3495724.3496930)] - **[2020 NeurIPS]** Online Fast Adaptation and Knowledge Accumulation (OSAKA) a New Approach to Continual Learning [[paper](https://arxiv.org/abs/2003.05856)] - **[2020 NeurIPS]** Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization [[paper](https://dl.acm.org/doi/abs/10.5555/3495724.3497189)] - **[2020 NeurIPS]** Meta-Consolidation for Continual Learning [[paper](https://arxiv.org/abs/2010.00352)][[code](https://github.com/JosephKJ/merlin)] - **[2020 NeurIPS]** Lifelong 
Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting [[paper](https://arxiv.org/abs/2007.07011)][[code](https://github.com/Lifelong-ML/LPG-FTW)] - **[2020 NeurIPS]** La-MAML: Look-ahead Meta Learning for Continual Learning [[paper](https://arxiv.org/abs/2007.13904)][[code](https://github.com/montrealrobotics/La-MAML)] - **[2020 NeurIPS]** GAN Memory with No Forgetting [[paper](https://arxiv.org/abs/2006.07543)][[code](https://github.com/MiaoyunZhao/GANmemory_LifelongLearning)] - **[2020 NeurIPS]** Dark Experience for General Continual Learning a Strong, Simple Baseline [[paper](https://arxiv.org/abs/2004.07211)][[code](https://github.com/aimagelab/mammoth)] - **[2020 NeurIPS]** Coresets via Bilevel Optimization for Continual Learning and Streaming [[paper](https://arxiv.org/abs/2006.03875)][[code](https://github.com/zalanborsos/bilevel_coresets)] - **[2020 NeurIPS]** Continual Learning with Node-Importance based Adaptive Group Sparse Regularization [[paper](https://arxiv.org/abs/2003.13726)] - **[2020 NeurIPS]** Continual Learning of Control Primitives: Skill Discovery via Reset-Games [[paper](https://arxiv.org/abs/2011.05286)][[code](https://github.com/siddharthverma314/clcp-neurips-2020)] - **[2020 NeurIPS]** Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks [[paper](https://arxiv.org/abs/2112.10017)][[code1](https://github.com/zixuanke/pycontinual)/[code2](https://github.com/ZixuanKe/CAT)] - **[2020 NeurIPS]** Continual Learning in Low-rank Orthogonal Subspaces [[paper](https://arxiv.org/abs/2010.11635)][[code](https://github.com/arslan-chaudhry/orthog_subspace)] - **[2020 NeurIPS]** Continual Deep Learning by Functional Regularisation of Memorable Past [[paper](https://openreview.net/pdf?id=ib:vapuIazp)][[code](https://github.com/team-approx-bayes/fromp)] - **[2020 NeurIPS]** Calibrating CNNs for Lifelong Learning [[paper](https://dl.acm.org/doi/abs/10.5555/3495724.3497031)] - **[2020 Nat Comm]** 
Brain-inspired replay for continual learning with artificial neural networks [[paper](https://www.nature.com/articles/s41467-020-17866-2)][[code](https://github.com/GMvandeVen/brain-inspired-replay)] - **[2020 IJCNN]** OvA-INN: Continual Learning with Invertible Neural Networks [[paper](https://ieeexplore.ieee.org/document/9206766/)] - **[2020 ICML]** XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning [[paper](https://arxiv.org/abs/2003.08561)][[code](https://github.com/EdwinKim3069/XtarNet)] - **[2020 ICML]** Optimal Continual Learning has Perfect Memory and is NP-HARD [[paper](https://arxiv.org/abs/2006.05188)] - **[2020 ICML]** Online Learned Continual Compression with Adaptive Quantization Modules [[paper](https://arxiv.org/abs/1911.08019)][[code](https://github.com/pclucas14/adaptive-quantization-modules)] - **[2020 ICML]** Online Continual Learning from Imbalanced Data [[paper](https://dl.acm.org/doi/10.5555/3524938.3525120)] - **[2020 ICML]** Neural Topic Modeling with Continual Lifelong Learning [[paper](https://arxiv.org/abs/2006.10909)][[code](https://github.com/pgcool/Lifelong-Neural-Topic-Modeling)] - **[2020 ICMLW]** Wandering Within a World Online Contextualized Few-Shot Learning [[paper](https://openreview.net/pdf/798a88cd0aefedd9aab888bc91f17fb86841e232.pdf)][[code](https://github.com/renmengye/oc-fewshot-public)] - **[2020 ICMLW]** Variational Beam Search for Continual Learning [[zoom](https://icml.cc/virtual/2020/8260)] - **[2020 ICMLW]** Variational Auto-Regressive Gaussian Processes for Continual Learning [[paper](https://arxiv.org/abs/2006.05468)][[code](https://github.com/uber-research/vargp)] - **[2020 ICMLW]** Understanding Regularisation Methods for Continual Learning [[paper](https://arxiv.org/abs/2006.06357v1)] - **[2020 ICMLW]** UNCLEAR: A Straightforward Method for Continual Reinforcement Learning 
[[paper](https://www.oxford-man.ox.ac.uk/wp-content/uploads/2020/11/UNCLEAR-A-Straightforward-Method-for-Continual-Reinforcement-Learning.pdf)] - **[2020 ICMLW]** Task-Agnostic Continual Learning via Stochastic Synapses [[zoom](https://icml.cc/virtual/2020/8253)] - **[2020 ICMLW]** Supermasks in Superposition [[paper](https://arxiv.org/abs/2006.14769)][[code](https://github.com/RAIVNLab/supsup)] - **[2020 ICMLW]** SOLA: Continual Learning with Second-Order Loss Approximation [[paper](https://arxiv.org/abs/2006.10974)] - **[2020 ICMLW]** Routing Networks with Co-training for Continual Learning [[paper](https://arxiv.org/abs/2009.04381)] - **[2020 ICMLW]** On Class Orderings for Incremental Learning [[paper](https://arxiv.org/abs/2007.02145)] - **[2020 ICMLW]** Deep Reinforcement Learning amidst Lifelong Non-Stationarity [[paper](https://openreview.net/pdf?id=P1OwHAhDVbd)] - **[2020 ICMLW]** Continual Reinforcement Learning with Multi-Timescale Replay [[paper](https://arxiv.org/abs/2004.07530)][[code](https://github.com/ChristosKap/multi_timescale_replay)] - **[2020 ICMLW]** Continual Learning in Human Activity Recognition: an Empirical Analysis of Regularization [[paper](https://arxiv.org/abs/2007.03032)][[code](https://github.com/srvCodes/continual-learning-benchmark)] - **[2020 ICMLW]** Continual Learning from the Perspective of Compression [[paper](https://arxiv.org/abs/2006.15078)] - **[2020 ICMLW]** Combining Variational Continual Learning with FiLM Layers [[paper](https://openreview.net/forum?id=fZBEGA1d-4Y)] - **[2020 ICMLW]** Anatomy of Catastrophic Forgetting Hidden Representations and Task Semantics [[paper](https://openreview.net/forum?id=LhY8QdUGSuw)] - **[2020 ICMLW]** A General Framework for Continual Learning of Compositional Structures [[paper](https://www.cis.upenn.edu/~eeaton/papers/Mendez2020General.pdf)] - **[2020 ICLR]** Uncertainty-guided Continual Learning with Bayesian Neural Networks 
[[paper](https://openreview.net/pdf?id=HklUCCVKDB)][[code](https://github.com/SaynaEbrahimi/UCB)]
- **[2020 ICLR]** Scalable and Order-robust Continual Learning with Additive Parameter Decomposition [[paper](https://arxiv.org/abs/1902.09432)][[code](https://github.com/iclr2020-apd/anonymous_iclr2020_apd_code)]
- **[2020 ICLR]** Functional Regularisation for Continual Learning with Gaussian Processes [[paper](https://arxiv.org/abs/1901.11356)][[code](https://github.com/AndreevP/FRCL)]
- **[2020 ICLR]** Continual Learning with Hypernetworks [[paper](https://openreview.net/pdf?id=SJgwNerKvB)][[code](https://github.com/chrhenning/hypercl)]
- **[2020 ICLR]** Continual Learning with Bayesian Neural Networks for Non-Stationary Data [[paper](https://openreview.net/forum?id=SJlsFpVtDB)]
- **[2020 ICLR]** Continual Learning with Adaptive Weights (CLAW) [[paper](https://arxiv.org/abs/1911.09514)]
- **[2020 ICLR]** Compositional Language Continual Learning [[paper](https://openreview.net/pdf?id=rklnDgHtDS)][[code](https://github.com/yli1/CLCL)]
- **[2020 ICLR]** A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [[paper](https://arxiv.org/abs/2001.00689)][[code](https://github.com/soochan-lee/CN-DPM)]
- **[2020 EMNLP]** Visually Grounded Continual Learning of Compositional Phrases [[paper](https://aclanthology.org/2020.emnlp-main.158/)][[code](https://github.com/INK-USC/VisCOLL)]
- **[2020 EMNLP]** Disentangle-based Continual Graph Representation Learning [[paper](https://aclanthology.org/2020.emnlp-main.237/)][[code](https://github.com/KXY-PUBLIC/DiCGRL)]
- **[2020 EMNLP]** Continual Learning for Natural Language Generation in Task-oriented Dialog Systems [[paper](https://aclanthology.org/2020.findings-emnlp.310/)]
- **[2020 ECCV]** Topology-Preserving Class-Incremental Learning [[paper](https://doi.org/10.1007/978-3-030-58529-7_16)]
- **[2020 ECCV]** Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks [[paper](https://arxiv.org/abs/1912.13503)]
- **[2020 ECCV]** Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [[paper](https://arxiv.org/abs/2007.12540)][[code](https://github.com/menelaoskanakis/RCM)]
- **[2020 ECCV]** REMIND Your Neural Network to Prevent Catastrophic Forgetting [[paper](https://arxiv.org/abs/1910.02509)][[code](https://github.com/tyler-hayes/REMIND)]
- **[2020 ECCV]** PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning [[paper](https://arxiv.org/abs/2004.13513)][[code](https://github.com/arthurdouillard/incremental_learning.pytorch)]
- **[2020 ECCV]** Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation [[paper](https://arxiv.org/abs/2104.11939)]
- **[2020 ECCV]** Online Continual Learning under Extreme Memory Constraints [[paper](https://arxiv.org/abs/2008.01510)][[code](https://github.com/DonkeyShot21/batch-level-distillation)]
- **[2020 ECCV]** More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning [[paper](https://doi.org/10.1007/978-3-030-58574-7_42)][[code](https://github.com/liuyudut/MUC)]
- **[2020 ECCV]** Memory-Efficient Incremental Learning Through Feature Adaptation [[paper](https://arxiv.org/abs/2004.00713)]
- **[2020 ECCV]** Learning latent representations across multiple data domains using Lifelong VAEGAN [[paper](https://arxiv.org/abs/2007.10221)][[code](https://github.com/dtuzi123/LifelongVAEGAN)]
- **[2020 ECCV]** Incremental Meta-Learning via Indirect Discriminant Alignment [[paper](https://arxiv.org/pdf/2002.04162.pdf)]
- **[2020 ECCV]** Imbalanced Continual Learning with Partitioning Reservoir Sampling [[paper](https://arxiv.org/abs/2009.03632)]
- **[2020 ECCV]** GDumb: A Simple Approach that Questions Our Progress in Continual Learning [[paper](https://openreview.net/forum?id=zeLEHYJhHp)][[code](https://github.com/drimpossible/GDumb)]
- **[2020 ECCV]** Class-Incremental Domain Adaptation [[paper](https://arxiv.org/abs/2107.11091)]
- **[2020 ECCV]** Adversarial Continual Learning [[paper](https://arxiv.org/abs/2003.09553)][[code](https://github.com/facebookresearch/Adversarial-Continual-Learning)]
- **[2020 ECAI]** Learning to Continually Learn [[paper](https://arxiv.org/pdf/2002.09571.pdf)]
- **[2020 CVPR]** Semantic Drift Compensation for Class-Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9156964/)][[code](https://github.com/yulu0724/SDC-IL)]
- **[2020 CVPR]** Modeling the Background for Incremental Learning in Semantic Segmentation [[paper](https://arxiv.org/abs/2002.00718)]
- **[2020 CVPR]** Mnemonics Training: Multi-Class Incremental Learning without Forgetting [[paper](https://openreview.net/forum?id=JSmccXnFPPF)][[code](https://github.com/yaoyao-liu/class-incremental-learning)]
- **[2020 CVPR]** Maintaining Discrimination and Fairness in Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9156766/)]
- **[2020 CVPR]** iTAML: An Incremental Task-Agnostic Meta-learning Approach [[paper](https://arxiv.org/abs/2003.11652)][[code](https://github.com/brjathu/iTAML)]
- **[2020 CVPR]** Incremental Learning In Online Scenario [[paper](https://ieeexplore.ieee.org/document/9156990/)]
- **[2020 CVPR]** Incremental Few-Shot Object Detection [[paper](https://ieeexplore.ieee.org/document/9157715/)]
- **[2020 CVPR]** Few-Shot Class-Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9157521)]
- **[2020 CVPR]** Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion [[paper](https://arxiv.org/abs/1912.08795)][[code](https://github.com/NVlabs/DeepInversion)]
- **[2020 CVPR]** Continual Learning with Extended Kronecker-factored Approximate Curvature [[paper](https://ieeexplore.ieee.org/document/9157569)]
- **[2020 CVPR]** Conditional Channel Gated Networks for Task-Aware Continual Learning [[paper](https://ieeexplore.ieee.org/document/9156310/)]
- **[2020 CVPRW]** What is Happening Inside a
Continual Learning Model: A Representation-Based Evaluation of Representational Forgetting [[paper](https://ieeexplore.ieee.org/document/9150688)]
- **[2020 CVPRW]** Stream-51: Streaming Classification and Novelty Detection from Videos [[paper](https://ieeexplore.ieee.org/document/9150885)]
- **[2020 CVPRW]** StackNet: Stacking feature maps for Continual learning [[paper](https://ieeexplore.ieee.org/document/9150740/)]
- **[2020 CVPRW]** Relationship Matters: Relation-Guided Knowledge Transfer for Incremental Learning of Object Detectors [[paper](https://ieeexplore.ieee.org/document/9150833/)]
- **[2020 CVPRW]** Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches [[paper](https://ieeexplore.ieee.org/document/9150818/)]
- **[2020 CVPRW]** Reducing catastrophic forgetting with learning on synthetic data [[paper](https://ieeexplore.ieee.org/document/9150615/)]
- **[2020 CVPRW]** Noise-Based Selection of Robust Inherited Model for Accurate Continual Learning [[paper](https://ieeexplore.ieee.org/abstract/document/9150982)]
- **[2020 CVPRW]** Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis [[paper](https://ieeexplore.ieee.org/document/9150601/)][[code](https://github.com/tyler-hayes/Deep_SLDA)]
- **[2020 CVPRW]** Generative Feature Replay For Class-Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9150851/)][[code](https://github.com/xialeiliu/GFR-IL)]
- **[2020 CVPRW]** Generating Accurate Pseudo Examples for Continual Learning [[paper](https://ieeexplore.ieee.org/document/9150934/)]
- **[2020 CVPRW]** Generalized Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9150844/)]
- **[2020 CVPRW]** Dropout as an Implicit Gating Mechanism For Continual Learning [[paper](https://arxiv.org/abs/2004.11545)][[code](https://github.com/imirzadeh/stable-continual-learning)]
- **[2020 CVPRW]** Continual Reinforcement Learning in 3D Non-stationary Environments [[paper](https://arxiv.org/abs/1905.10112)][[code](https://github.com/Pervasive-AI-Lab/crlmaze)]
- **[2020 CVPRW]** Continual Learning of Object Instances [[paper](https://arxiv.org/abs/2004.10862)]
- **[2020 CVPRW]** Continual Learning for Anomaly Detection in Surveillance Videos [[paper](https://arxiv.org/abs/2004.07941)]
- **[2020 CVPRW]** Cognitively-Inspired Model for Incremental Learning Using a Few Examples [[paper](https://ieeexplore.ieee.org/document/9150667/)][[code](https://github.com/aliayub7/CBCL)]
- **[2020 CVPRW]** CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition [[paper](https://arxiv.org/abs/2004.09215)]
- **[2020 COLING]** Investigating Catastrophic Forgetting During Continual Training for Neural Machine Translation [[paper](https://aclanthology.org/2020.coling-main.381/)]
- **[2020 COLING]** Distill and Replay for Continual Language Learning [[paper](https://aclanthology.org/2020.coling-main.318/)]
- **[2020 COLING]** Continual Lifelong Learning in Natural Language Processing: A Survey [[paper](https://aclanthology.org/2020.coling-main.574/)]
- **[2020 COLING]** A Two-phase Prototypical Network Model for Incremental Few-shot Relation Classification [[paper](https://aclanthology.org/2020.coling-main.142/)]
- **[2020 BMVC]** Initial Classifier Weights Replay for Memoryless Class Incremental Learning [[paper](https://arxiv.org/abs/2008.13710)][[code](https://github.com/EdenBelouadah/class-incremental-learning)]
- **[2020 AISTATS]** Orthogonal Gradient Descent for Continual Learning [[paper](https://arxiv.org/abs/1910.07104)]
- **[2020 ACL]** Continual Relation Learning via Episodic Memory Activation and Reconsolidation [[paper](https://aclanthology.org/2020.acl-main.573/)]
- **[2020 AAAI]** Residual Continual Learning [[paper](https://arxiv.org/abs/2002.06774)]
- **[2020 AAAI]** Overcoming Catastrophic Forgetting by Neuron-Level
Plasticity Control [[paper](https://arxiv.org/pdf/1907.13322)]
- **[2020 AAAI]** Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks [[paper](https://doi.org/10.1609/aaai.v34i04.5942)]
- **[2020 AAAI]** Generative Continual Concept Learning [[paper](https://arxiv.org/abs/1906.03744)]
- **[2020 AAAI]** ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding [[paper](https://doi.org/10.1609/aaai.v34i05.6428)][[code](https://github.com/PaddlePaddle/ERNIE)]
- **[2020 AAAI]** Bi-Objective Continual Learning: Learning New While Consolidating Known [[paper](https://doi.org/10.1609/aaai.v34i04.6060)]

### 2019

- **[2019 NIPS]** Uncertainty-based Continual Learning with Adaptive Regularization [[paper](https://arxiv.org/abs/1905.11614)][[code](https://github.com/csm9493/UCL)]
- **[2019 NIPS]** RPSNet: Random Path Selection for Incremental Learning [[paper](https://arxiv.org/abs/1906.01120v2)]
- **[2019 NIPS]** Reconciling meta-learning and continual learning with online mixtures of tasks [[paper](https://arxiv.org/abs/1812.06080)]
- **[2019 NIPS]** Online Continual Learning with Maximally Interfered Retrieval [[paper](https://arxiv.org/abs/1908.04742)][[code](https://github.com/optimass/Maximally_Interfered_Retrieval)]
- **[2019 NIPS]** Meta-Learning Representations for Continual Learning [[paper](https://arxiv.org/abs/1905.12588v1)][[code](https://github.com/Khurramjaved96/mrcl)]
- **[2019 NIPS]** Incremental Few-Shot Learning with Attention Attractor Networks [[paper](https://arxiv.org/abs/1810.07218)][[code](https://github.com/renmengye/inc-few-shot-attractor-public)]
- **[2019 NIPS]** Gradient based sample selection for online continual learning [[paper](https://arxiv.org/abs/1903.08671)][[code](https://github.com/rahafaljundi/Gradient-based-Sample-Selection)]
- **[2019 NIPS]** Experience Replay for Continual Learning [[paper](https://arxiv.org/abs/1811.11682)]
- **[2019 NIPS]** Episodic Memory in Lifelong Language Learning [[paper](https://arxiv.org/abs/1906.01076)]
- **[2019 NIPS]** Compacting, Picking and Growing for Unforgetting Continual Learning [[paper](https://arxiv.org/abs/1910.06562)][[code](https://github.com/ivclab/CPG)]
- **[2019 Nat Mach Intell]** Continual learning of context-dependent processing in neural networks [[paper](https://arxiv.org/abs/1810.01256)][[code](https://github.com/beijixiong3510/OWM)]
- **[2019 NAACL]** Continual Learning for Sentence Representations Using Conceptors [[paper](https://aclanthology.org/N19-1331/)][[code](https://github.com/liutianlin0121/continual-sentence-embedding)]
- **[2019 IJCAI]** Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay [[paper](https://arxiv.org/pdf/1903.04566v2.pdf)]
- **[2019 ICML]** Policy Consolidation for Continual Reinforcement Learning [[paper](https://arxiv.org/abs/1902.00255v2)]
- **[2019 ICML]** Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting [[paper](https://arxiv.org/abs/1904.00310)]
- **[2019 ICME]** An End-to-End Architecture for Class-Incremental Object Detection with Knowledge Distillation [[paper](https://ieeexplore.ieee.org/document/8784755/)]
- **[2019 ICLR]** Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation [[paper](https://openreview.net/forum?id=ryGvcoA5YX)]
- **[2019 ICLR]** Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference [[paper](https://arxiv.org/abs/1810.11910)][[code](https://github.com/mattriemer/mer)]
- **[2019 ICLR]** Efficient Lifelong Learning with A-GEM [[paper](https://arxiv.org/abs/1812.00420)][[code](https://github.com/facebookresearch/agem)]
- **[2019 ICLR]** A Comprehensive, Application-Oriented Study of Catastrophic Forgetting in DNNs [[paper](https://arxiv.org/abs/1905.08101)]
- **[2019 ICCV]** Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild [[paper](https://arxiv.org/abs/1903.12648)][[code](https://github.com/kibok90/iccv2019-inc)]
- **[2019 ICCV]** Lifelong GAN: Continual Learning for Conditional Image Generation [[paper](https://ieeexplore.ieee.org/document/9009516)]
- **[2019 ICCV]** Incremental Learning Using Conditional Adversarial Networks [[paper](https://ieeexplore.ieee.org/document/9009031/)]
- **[2019 ICCV]** IL2M: Class Incremental Learning With Dual Memory [[paper](https://ieeexplore.ieee.org/document/9009019)]
- **[2019 ICCV]** Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation [[paper](https://arxiv.org/abs/1908.02984)]
- **[2019 ICCVW]** Incremental Learning Techniques for Semantic Segmentation [[paper](https://ieeexplore.ieee.org/document/9022296/)]
- **[2019 EMNLP]** A Progressive Model to Enable Continual Learning for Semantic Slot Filling [[paper](https://aclanthology.org/D19-1126/)]
- **[2019 CVPR]** Task-Free Continual Learning [[paper](https://ieeexplore.ieee.org/document/8953745/)]
- **[2019 CVPR]** Learning without Memorizing [[paper](https://arxiv.org/abs/1811.08051)]
- **[2019 CVPR]** Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning [[paper](https://ieeexplore.ieee.org/document/8953627/)]
- **[2019 CVPR]** Learning a Unified Classifier Incrementally via Rebalancing [[paper](https://ieeexplore.ieee.org/document/8953661)]
- **[2019 CVPR]** Large Scale Incremental Learning [[paper](https://ieeexplore.ieee.org/document/8954008/)]
- **[2019 ACL]** Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering [[paper](https://aclanthology.org/P19-1350/)]
- **[2019 ACL]** Incremental Learning from Scratch for Task-Oriented Dialogue Systems
[[paper](https://aclanthology.org/P19-1361/)][[code](https://github.com/Leechikara/Incremental-Dialogue-System)]
- **[2019 AAAI]** Scalable Recollections for Continual Lifelong Learning [[paper](https://arxiv.org/pdf/1711.06761)]

### 2018

- **[2018 NIPS]** Reinforced Continual Learning [[paper](https://arxiv.org/abs/1805.12369)]
- **[2018 NIPS]** Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines [[paper](https://arxiv.org/abs/1810.12488)][[code](https://github.com/GT-RIPL/Continual-Learning-Benchmark)]
- **[2018 NIPS]** Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting [[paper](https://arxiv.org/abs/1805.07810)]
- **[2018 NIPS]** Memory Replay GANs: learning to generate images from new categories without forgetting [[paper](https://arxiv.org/abs/1809.02058)][[code](https://github.com/WuChenshen/MeRGAN)]
- **[2018 NIPS]** Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies [[paper](https://arxiv.org/abs/1808.06508)]
- **[2018 NIPSW]** Three scenarios for continual learning [[paper](https://arxiv.org/abs/1904.07734)][[code](https://github.com/GMvandeVen/continual-learning)]
- **[2018 ICPR]** Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting [[paper](https://arxiv.org/abs/1802.02950)][[code](https://github.com/xialeiliu/RotateNetworks)]
- **[2018 ICML]** Progress & Compress: A scalable framework for continual learning [[paper](https://arxiv.org/abs/1805.06370)]
- **[2018 ICML]** Overcoming Catastrophic Forgetting with Hard Attention to the Task [[paper](https://arxiv.org/abs/1801.01423)][[code](https://github.com/joansj/hat)]
- **[2018 ICML]** Continual Reinforcement Learning with Complex Synapses [[paper](https://arxiv.org/abs/1802.07239)]
- **[2018 ICLR]** Variational Continual Learning [[paper](https://arxiv.org/abs/1710.10628)]
- **[2018 ICLR]** Lifelong Learning with Dynamically Expandable Networks [[paper](https://arxiv.org/abs/1708.01547)]
- **[2018 ICLR]** FearNet: Brain-Inspired Model for Incremental Learning [[paper](https://arxiv.org/abs/1711.10563)]
- **[2018 ECCV]** Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence [[paper](https://arxiv.org/abs/1801.10112)][[code](https://github.com/facebookresearch/agem)]
- **[2018 ECCV]** Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights [[paper](https://arxiv.org/abs/1801.06519)][[code](https://github.com/arunmallya/piggyback)]
- **[2018 ECCV]** Memory Aware Synapses: Learning What (Not) to Forget [[paper](https://arxiv.org/abs/1711.09601)]
- **[2018 ECCV]** Lifelong Learning via Progressive Distillation and Retrospection [[paper](https://doi.org/10.1007/978-3-030-01219-9_27)]
- **[2018 ECCV]** End-to-End Incremental Learning [[paper](https://arxiv.org/abs/1807.09536)]
- **[2018 CVPR]** PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning [[paper](https://ieeexplore.ieee.org/document/8578908/)][[code](https://github.com/arunmallya/packnet)]
- **[2018 BMVC]** Exemplar-Supported Generative Reproduction for Class Incremental Learning [[paper](https://ieeexplore.ieee.org/document/9034001)]
- **[2018 AAAI]** Selective Experience Replay for Lifelong Learning [[paper](https://arxiv.org/abs/1802.10269)]

### 2017

- **[2017 arXiv]** PathNet: Evolution Channels Gradient Descent in Super Neural Networks [[paper](https://arxiv.org/abs/1701.08734)]
- **[2017 arXiv]** Continual Learning in Generative Adversarial Nets [[paper](https://arxiv.org/abs/1705.08395)]
- **[2017 PNAS]** Overcoming catastrophic forgetting in neural networks [[paper](https://arxiv.org/abs/1612.00796)]
- **[2017 NIPS]** Overcoming Catastrophic Forgetting by Incremental Moment Matching [[paper](https://arxiv.org/pdf/1703.08475.pdf)][[code](https://github.com/btjhjeon/IMM_tensorflow)]
- **[2017 NIPS]** Gradient Episodic Memory for Continual Learning [[paper](https://arxiv.org/abs/1706.08840)][[code](https://github.com/facebookresearch/GradientEpisodicMemory)]
- **[2017 NIPS]** Continual Learning with Deep Generative Replay [[paper](https://arxiv.org/abs/1705.08690)]
- **[2017 ICML]** Continual Learning Through Synaptic Intelligence [[paper](https://arxiv.org/abs/1703.04200)][[code](https://github.com/ganguli-lab/pathint)]
- **[2017 ICCV]** Incremental Learning of Object Detectors without Catastrophic Forgetting [[paper](https://arxiv.org/abs/1708.06977)]
- **[2017 ICCV]** Encoder Based Lifelong Learning [[paper](https://ieeexplore.ieee.org/document/8237410/)]
- **[2017 CVPR]** iCaRL: Incremental Classifier and Representation Learning [[paper](https://arxiv.org/abs/1611.07725)][[code](https://github.com/srebuffi/iCaRL)]
- **[2017 CVPR]** Expert Gate: Lifelong Learning with a Network of Experts [[paper](https://arxiv.org/abs/1611.06194)]
- **[2017 CoRL]** CORe50: a New Dataset and Benchmark for Continuous Object Recognition [[paper](https://arxiv.org/abs/1705.03550)]

### 2016 and earlier

- **[2016 arXiv]** Progressive Neural Networks [[paper](https://arxiv.org/abs/1606.04671)]
- **[2016 ECCV]** Learning without Forgetting [[paper](https://arxiv.org/abs/1606.09282)]
- **[2014 ICML]** A PAC-Bayesian Bound for Lifelong Learning [[paper](https://arxiv.org/abs/1311.2838)]

## Citation

Please cite our paper if it is helpful to your work:

```bibtex
@article{wang2023comprehensive,
  title={A comprehensive survey of continual learning: Theory, method and application},
  author={Wang, Liyuan and Zhang, Xingxing and Su, Hang and Zhu, Jun},
  journal={arXiv preprint arXiv:2302.00487},
  year={2023}
}
```
Owner
- Name: YongSeong
- Login: deeprine
- Kind: user
- Location: YongIn, Republic of Korea
- Company: @KangnamUniversity
- Website: leeyongseong.oopy.io
- Repositories: 23
- Profile: https://github.com/deeprine
- Bio: AI engineer