This section outlines the experimental setup for the new Instance-Incremental Learning benchmarks.

Evaluating Instance-Incremental Learning: CIL Methods on Cifar-100 and ImageNet


Abstract and 1. Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work, and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

5. Experimental results

We reorganize the training sets of several existing datasets that are commonly used in class-incremental learning to establish the benchmarks. Implementation details of our experiments can be found in the supplementary material.

5.1. Experiment Setup

5.1.1 Datasets


Table 1. Instance-incremental learning on Cifar-100 and ImageNet. PP reflects the accuracy change on the test data Dtest over 10 IIL tasks. F is the forgetting rate on the base training data D(0) after the last IIL task. Results are the average scores and their 95% confidence intervals over 5 runs with different incremental data orders. Following previous works, ResNet-18 is used as the backbone network for all experiments.

Figure 4. Detailed performance promotion (PP) and forgetting rate (F) at each IIL phase. Best viewed in color with zoom.
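
The precise definitions of PP and F are given in the evaluation-metrics subsection of the paper (not reproduced here); as a rough illustration of the quantities Table 1 and Figure 4 report, the sketch below derives a performance-promotion curve and a forgetting rate from per-phase accuracies. The function names and the exact formulations (promotion relative to the base model's test accuracy, forgetting as the accuracy drop on D(0)) are assumptions made for illustration only.

```python
# Minimal sketch (assumption): PP at phase t is the change in test-set accuracy
# relative to the base model, and F is the accuracy drop on the base training
# data D(0) after the last IIL task. Follows the description in Table 1, not
# the paper's exact formulas.
from typing import Sequence


def performance_promotion(test_acc: Sequence[float]) -> list:
    """test_acc[0] is the base model's accuracy on Dtest; test_acc[1:] are the
    accuracies after each of the 10 IIL phases."""
    base = test_acc[0]
    return [acc - base for acc in test_acc[1:]]


def forgetting_rate(base_acc_before: float, base_acc_after: float) -> float:
    """Accuracy drop on the base training data D(0) after the last IIL task."""
    return base_acc_before - base_acc_after


# Example usage with made-up numbers.
pp_curve = performance_promotion([0.712, 0.715, 0.719, 0.721])
f_rate = forgetting_rate(base_acc_before=0.994, base_acc_after=0.981)
```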

ImageNet [24] is another commonly used dataset. ImageNet-1000 consists of 1.2 million training images and 150K testing images from 1,000 classes. Following Douillard et al. [4, 6], we randomly select 100 classes (ImageNet-100) and split them into one base set with half of the training images and 10 incremental sets with the other half, as we do on Cifar-100.
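
To make the split concrete, the sketch below builds one base set from half of each class's training images and spreads the remaining half across 10 incremental sets. The function name and the per-phase partitioning (even round-robin slices of the held-out half) are assumptions for illustration; the authors' exact protocol is in their supplementary material.

```python
import random
from collections import defaultdict


def build_iil_splits(samples, num_phases=10, seed=0):
    """samples: list of (image_path, class_id) pairs for the selected 100 classes.
    Returns (base_set, phases): half of each class goes to the base set, the
    other half is divided across `num_phases` incremental sets.
    A sketch of the split described in Sec. 5.1.1, not the authors' code."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, cls in samples:
        by_class[cls].append((path, cls))

    base_set, phases = [], [[] for _ in range(num_phases)]
    for cls, items in by_class.items():
        rng.shuffle(items)
        half = len(items) // 2
        base_set.extend(items[:half])
        # Spread the remaining half across the incremental phases round-robin.
        for i, item in enumerate(items[half:]):
            phases[i % num_phases].append(item)
    return base_set, phases
```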

Entity-30, included in the BREEDS datasets [25], simulates real-world sub-population shift. For example, the base model learns the concept of a dog from photos of “Poodles”, but on the incremental data it has to extend the “dog” concept to “Terriers” or “Dalmatians”. Entity-30 has 240 subclasses and a large data size. As sub-population shift is a specific case of instance-level concept drift, we evaluate the proposed method on Entity-30 following the setting of ISL [13].

5.1.2 Evaluation metrics


5.1.3 Evaluated baselines

As few existing methods are proposed for the IIL setting, we reproduce several classic and SOTA CIL methods by referring to their original code or papers with minimal revision, including iCaRL [22] and LwF [12], which utilize label-level distillation; PODNet [4], which implements distillation at the feature level; DER [31], which expands the network dynamically and attains the best CIL results; OnPro [29], which uses online prototypes to enhance the existing boundaries; and online learning [6], which can be applied to hybrid-incremental learning. ISL [13], proposed for incremental sub-population learning, is the only method that can be directly applied in the new IIL setting. As most CIL methods require old exemplars, to compare with them we additionally set a memory of 20 exemplars per class for these methods. We aim to provide a fair and comprehensive comparison in the new IIL scenario. Details of reproducing these methods can be found in the supplementary material.
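
For the replay-based baselines, the comparison allows 20 old exemplars per class. Below is a minimal sketch of such a fixed-budget memory; random selection is used purely for illustration, whereas baselines such as iCaRL choose exemplars by herding, so this is an assumption rather than a reproduction of any specific method.

```python
import random
from collections import defaultdict


class ExemplarMemory:
    """Fixed-budget exemplar memory (20 samples per class by default).
    Illustrative only: baselines like iCaRL use herding-based selection."""

    def __init__(self, per_class=20, seed=0):
        self.per_class = per_class
        self.rng = random.Random(seed)
        self.store = defaultdict(list)

    def update(self, samples):
        """samples: iterable of (image, class_id); keep at most `per_class` per class."""
        for image, cls in samples:
            self.store[cls].append(image)
        for cls, items in self.store.items():
            if len(items) > self.per_class:
                self.store[cls] = self.rng.sample(items, self.per_class)

    def replay_set(self):
        """Return all stored exemplars as (image, class_id) pairs for rehearsal."""
        return [(img, cls) for cls, items in self.store.items() for img in items]
```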


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(8) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
