This section reviews literature related to Instance-Incremental Learning (IIL), contrasting it with the more explored Class-Incremental Learning (CIL).

Incremental Learning: Comparing Methods for Catastrophic Forgetting and Model Promotion


Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

2. Related works

This paper is devoted to instance-incremental learning, a topic closely related to CIL but seldom investigated. In the following, related work on class-incremental learning, continual domain adaptation, and methods based on knowledge distillation (KD) is introduced.

Class-incremental learning. CIL aims to learn new classes without suffering from the notorious catastrophic forgetting problem and is the main topic of most works in this area. CIL methods can be categorized into three types: 1) important-weight regularization [1, 10, 19, 32], which constrains the weights important for old tasks and frees the unimportant ones for the new task. Freezing weights limits the ability to learn from new data and often leads to inferior performance on new classes. 2) Rehearsal or pseudo-rehearsal methods, which store a small set of typical exemplars [2, 4, 9, 22] or rely on a generative network to produce old data [23] for retaining old knowledge. These methods usually employ knowledge distillation and outperform weight-regularization methods. Although prototypes of old classes are effective in preserving knowledge, they cannot promote the model's performance on hard samples, which remains a problem in real deployment. 3) Dynamic network architectures [8, 15, 30, 31], which adaptively expand the network for each round of new knowledge learning. However, deploying a continually changing neural model in real scenarios is troublesome, especially when the model grows too large. Although most CIL methods are strong at learning new classes, few of them can be directly applied to the new IIL setting in our tests, because performance promotion on old classes is less emphasized in CIL.
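To make the first category concrete, here is a minimal sketch of an important-weight penalty in the spirit of EWC [10], assuming PyTorch; `old_params` (a snapshot of the old model's weights), `fisher_diag` (a per-parameter importance estimate), and the weight `lam` are illustrative names, not from the paper.

```python
import torch

def importance_penalty(model, old_params, fisher_diag, lam=100.0):
    """Penalize drift in parameters that were important for old tasks,
    leaving unimportant parameters free to adapt to the new task."""
    penalty = 0.0
    for name, p in model.named_parameters():
        # Squared distance from the old solution, weighted by importance.
        penalty = penalty + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

# Typical use during training on new data:
# loss = task_loss + importance_penalty(model, old_params, fisher_diag)
```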

Knowledge distillation-based incremental learning. Most existing incremental learning works utilize knowledge distillation (KD) to mitigate catastrophic forgetting. LwF [12] is one of the earliest approaches, constraining the predictions on new data through KD. iCaRL [22] and many other methods distill knowledge on preserved exemplars to free up learning capacity for new data. Zhai et al. [33] and Zhang et al. [34] exploit distillation with augmented data and unlabeled auxiliary data at negligible cost. Unlike the above label-level distillation, Kang et al. [9] and Douillard et al. [4] proposed distilling knowledge at the feature level for CIL. Compared to these approaches, the proposed decision boundary-aware distillation requires no access to old exemplars and is simple yet effective at learning new knowledge while retaining the old.
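As a minimal illustration of label-level distillation in the style of LwF [12] (not the paper's decision boundary-aware variant), the sketch below assumes PyTorch; the temperature `T` and the loss weight `alpha` in the usage comment are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def label_level_kd(student_logits, teacher_logits, T=2.0):
    """Pull the new model's softened class distribution toward the frozen
    old model's, retaining old knowledge without storing exemplars."""
    log_p_new = F.log_softmax(student_logits / T, dim=1)
    p_old = F.softmax(teacher_logits / T, dim=1)
    # KL between softened distributions; T^2 keeps the gradient scale stable.
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

# Training on new data only:
#   with torch.no_grad():
#       teacher_logits = old_model(x)
#   logits = new_model(x)
#   loss = F.cross_entropy(logits, y) + alpha * label_level_kd(logits, teacher_logits)
```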

Comparison with CDA and ISL. Recently, some works on continual domain adaptation (CDA) [7, 21, 27] and incremental subpopulation learning (ISL) [13] have been proposed that are highly similar to the IIL setting. All three settings share a fixed label space. CDA focuses on visual domain variations such as illumination and background. ISL is a specific case of CDA that pays more attention to subcategories within a class, such as Poodles and Terriers. Compared to them, IIL is a more general setting in which the concept drift is not limited to the domain shift in CDA or the subpopulation shift in ISL. More importantly, the new IIL not only aims to retain performance but also has to promote generalization with a few new observations across the whole data space.


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::

