This section outlines the experimental setup for the new Instance-Incremental Learning benchmarks.

Evaluating Instance-Incremental Learning: CIL Methods on Cifar-100 and ImageNet


Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

5. Experimental results

To establish the benchmarks, we reorganize the training sets of several existing datasets that are commonly used in class-incremental learning. Implementation details of our experiments can be found in the supplementary material.

5.1. Experiment Setup

5.1.1 Datasets


Table 1. Instance-incremental learning on Cifar-100 and ImageNet. PP reflects the change in accuracy on the test data D^test over 10 IIL tasks. F is the forgetting rate on the base training data D^(0) after the last IIL task. Results are the average score and 95% confidence interval over 5 runs with different incremental data orders. Following previous works, ResNet-18 is used as the backbone network for all experiments.

Figure 4. Detailed performance promotion (PP) and forgetting rate (F) at each IIL phase. Best viewed in color with zoom.
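Based only on how the Table 1 and Figure 4 captions describe PP and F, the following is a minimal sketch of the two reported metrics. The function names, and the assumption that both metrics are simple accuracy differences, are ours; they are not the paper's exact definitions.

```python
from typing import Sequence


def performance_promotion(test_acc_per_phase: Sequence[float], base_test_acc: float) -> list:
    """PP at each IIL phase: accuracy gain on the fixed test set over the base model."""
    return [acc - base_test_acc for acc in test_acc_per_phase]


def forgetting_rate(base_train_acc_before: float, base_train_acc_after: float) -> float:
    """F: accuracy lost on the base training data D^(0) after the final IIL task."""
    return base_train_acc_before - base_train_acc_after


# Example with dummy numbers (not results from the paper):
pp = performance_promotion([0.71, 0.72, 0.74], base_test_acc=0.70)
f = forgetting_rate(base_train_acc_before=0.95, base_train_acc_after=0.93)
print(pp, f)
```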

ImageNet [24] is another commonly used dataset. ImageNet-1000 consists of 1.2 million training images and 150K testing images from 1,000 classes. Following Douillard et al. [4, 6], we randomly select 100 classes (ImageNet-100) and split them into one base set with half of the training images and 10 incremental sets with the other half, as we do on Cifar-100.
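As a concrete illustration of the split described above (per class, half of the training images form the base set and the other half is spread across 10 incremental sets), here is a minimal sketch. The function name and the round-robin assignment are our assumptions for illustration, not the authors' released code.

```python
import random
from collections import defaultdict


def build_iil_split(labels, num_incremental=10, seed=0):
    """labels: class label of each training sample, indexed 0..N-1.
    Returns (base_indices, [incremental_indices_1 .. incremental_indices_10])."""
    rng = random.Random(seed)
    per_class = defaultdict(list)
    for idx, y in enumerate(labels):
        per_class[y].append(idx)

    base, incremental = [], [[] for _ in range(num_incremental)]
    for y, idxs in per_class.items():
        rng.shuffle(idxs)
        half = len(idxs) // 2
        base.extend(idxs[:half])            # first half of each class -> base set
        for t, idx in enumerate(idxs[half:]):
            incremental[t % num_incremental].append(idx)  # remaining half spread over 10 phases
    return base, incremental
```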

Entity-30, included in the BREEDS datasets [25], simulates real-world sub-population shift. For example, the base model learns the concept of "dog" from photos of "Poodles", but on incremental data it has to extend the "dog" concept to "Terriers" or "Dalmatians". Entity-30 contains 240 subclasses and a large amount of data. Since sub-population shift is a specific case of instance-level concept drift, we evaluate the proposed method on Entity-30 following the setting of ISL [13].
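To make the sub-population shift concrete, the toy sketch below keeps the label space fixed while swapping the subclasses behind each superclass. The subclass lists and the helper are illustrative only; the actual splits come from BREEDS [25].

```python
# Base phase sees some sub-populations of each superclass; incremental phases see others.
BASE_SUBPOPULATIONS = {"dog": ["Poodle"], "bird": ["Robin"]}
INCREMENTAL_SUBPOPULATIONS = {"dog": ["Terrier", "Dalmatian"], "bird": ["Owl"]}


def superclass_label(subclass: str) -> str:
    """Map a subclass seen at any phase to the superclass label the model must predict."""
    for phase in (BASE_SUBPOPULATIONS, INCREMENTAL_SUBPOPULATIONS):
        for superclass, subclasses in phase.items():
            if subclass in subclasses:
                return superclass
    raise KeyError(subclass)


# The label space never grows ("dog" stays "dog"); only the instances behind each label
# change, which is why sub-population shift is a special case of instance-level drift.
print(superclass_label("Dalmatian"))  # -> "dog"
```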

5.1.2 Evaluation metrics


5.1.3 Evaluated baselines

As few existing methods are proposed for the IIL setting, we reproduce several classic and SOTA CIL methods with minimal revision by referring to their original code or papers, including iCaRL [22] and LwF [12], which utilize label-level distillation; PODNet [4], which implements distillation at the feature level; DER [31], which expands the network dynamically and attains the best CIL results; OnPro [29], which uses online prototypes to enhance the existing boundaries; and online learning [6], which can be applied to hybrid-incremental learning. ISL [13], proposed for incremental sub-population learning, is the only method that can be directly applied in the new IIL setting. As most CIL methods require old exemplars, we additionally provide these methods with a memory of 20 exemplars per class. We aim to provide a fair and comprehensive comparison in the new IIL scenario. Details of reproducing these methods can be found in our supplementary material.
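For the exemplar-based baselines, the 20-exemplars-per-class memory can be pictured with the sketch below. It trims each class by random selection for simplicity, whereas methods such as iCaRL select exemplars by herding; the class name and API here are our own and do not come from any baseline's code.

```python
import random
from collections import defaultdict


class ExemplarMemory:
    """Fixed-budget memory keeping at most `per_class` exemplar indices per class."""

    def __init__(self, per_class=20, seed=0):
        self.per_class = per_class
        self.rng = random.Random(seed)
        self.buffer = defaultdict(list)  # class id -> stored sample indices

    def update(self, indices, labels):
        """Add new samples, then trim each class back to the per-class budget."""
        for idx, y in zip(indices, labels):
            self.buffer[y].append(idx)
        for y, idxs in self.buffer.items():
            if len(idxs) > self.per_class:
                self.buffer[y] = self.rng.sample(idxs, self.per_class)

    def all_exemplars(self):
        """Flatten the memory into one list of indices for replay during an IIL phase."""
        return [idx for idxs in self.buffer.values() for idx in idxs]
```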


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(8) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
