This article presents an ablation study showing that the proposed IIL method performs well with larger networks.

Network Size and Task Number: Ablation Study on IIL Performance and Stability


Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  7. Details of the theoretical analysis on KC-EMA mechanism in IIL
  8. Algorithm overview
  9. Dataset details
  10. Implementation details
  11. Visualization of dusted input images
  12. More experimental results

12. More experimental results

12.1. Ablation study on network size

To investigate the impact of network size on the proposed method, we compare the performance of ResNet-18, ResNet-34, and ResNet-50 on ImageNet-100. As shown in Tab. 5, the proposed method performs well as the network grows. When the network is larger, more parameters can be utilized for learning new knowledge with the proposed decision boundary-aware distillation. Hence, consolidating knowledge from the student to the teacher causes less forgetting.

Table 5. Impact of the network size on the proposed method.

Table 6. Performance of the proposed method with different IIL task numbers.

12.2. Ablation study on the task number

As mentioned in Sec. 7, our method accumulates error over consecutive IIL tasks. However, this error accumulates slowly and mainly affects performance on old tasks, i.e., the forgetting rate. We further study the impact of the task number on the performance of the proposed method by splitting the incremental data into different numbers of subsets. As shown in Tab. 6, as the task number increases, the performance promotion changes little while the forgetting rate increases slightly. The minor variation in performance promotion shows that the proposed method is stable in learning new knowledge, irrespective of the number of tasks; the acquisition of new knowledge primarily hinges on the volume of new data involved. Although we increase the task number in these experiments, the total amount of new data utilized in the IIL phase is the same. However, increasing the task number increases the number of EMA steps, which causes more forgetting on the old data. The experimental results in Tab. 6 validate our analysis in Sec. 7.
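To make the link between task number and forgetting concrete, the following minimal sketch assumes a vanilla-EMA-style consolidation with a fixed momentum and treats each task's consolidation as a single EMA step; the momentum value is illustrative and the KC-EMA weighting from the main paper is not reproduced here.

```python
# Minimal sketch (assumption: vanilla-EMA-style consolidation with fixed
# momentum m and one consolidation step per IIL task; the KC-EMA weighting
# from the main paper is not reproduced here).
# After k consolidation steps the original teacher weights retain a factor
# of m**k, so splitting the same pool of new data into more tasks leaves
# less of the old knowledge intact, consistent with the slightly higher
# forgetting rate observed in Tab. 6.

m = 0.99  # illustrative momentum, not a value taken from the paper

for num_tasks in (5, 10, 25):
    retained = m ** num_tasks  # fraction of the base weights kept in the teacher
    print(f"{num_tasks:2d} tasks -> base-weight factor {retained:.3f}")

# Example output:
#  5 tasks -> base-weight factor 0.951
# 10 tasks -> base-weight factor 0.904
# 25 tasks -> base-weight factor 0.778
```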

Compared to the performance promotion, forgetting on the old data is negligible. Notably, when the task number is relatively small, such as 5 in Tab. 6, the proposed method slightly boosts the model's performance on the base data. This behavior is similar to that of the full-data model, which demonstrates the capability of our method to accumulate knowledge from new data.

12.3. Detailed comparison between the KC-EMA and vanilla EMA

The performance of vanilla EMA and the proposed KC-EMA during training is shown in Fig. 11. As can be seen, the student model's accuracy initially plummets due to the introduction of new data. However, around the 10th epoch, there is a resurgence in accuracy for both the KC-EMA and vanilla EMA models. Therefore, we empirically set a freezing epoch of 10 in the proposed method.

Figure 11. Comparison between the proposed KC-EMA and vanilla EMA during training in the first IIL task, where t denotes the teacher model and s denotes the student model. Results are obtained on CIFAR-100.

When EMA is applied after the 10th epoch, the teacher model under vanilla EMA is rapidly drawn towards the student model. This homogenization, however, does not enhance either model. Instead, it leads to a decline in test accuracy due to overfitting to the new data. In contrast, with KC-EMA, both the teacher and student models exhibit gradual growth, which indicates knowledge accumulation in the two models. On one hand, consolidating new knowledge into the teacher model improves its test performance. On the other hand, a teacher model equipped with new knowledge liberates the student model to learn the new data; that is, the constraint imposed by the teacher in distillation is alleviated.
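To make the schedule around the freezing epoch concrete, here is a PyTorch-style sketch of one IIL task in which the teacher is frozen for the first 10 epochs and then updated by a plain EMA. The momentum value, function names, and the `distill_loss` placeholder are assumptions for illustration, and the knowledge-consolidation weighting that distinguishes KC-EMA from vanilla EMA is not reproduced here.

```python
import torch

EMA_FREEZE_EPOCHS = 10  # freezing epoch reported in the paper
EMA_MOMENTUM = 0.999    # illustrative value, not taken from the paper


@torch.no_grad()
def vanilla_ema_update(teacher, student, momentum=EMA_MOMENTUM):
    # Vanilla EMA: every teacher parameter is pulled toward the student,
    # so after enough steps the two models homogenize.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)


def train_iil_task(teacher, student, loader, optimizer, epochs, distill_loss):
    # Skeleton of one IIL task with the 10-epoch EMA freeze. `distill_loss`
    # is a placeholder standing in for the decision boundary-aware
    # distillation loss used in the paper.
    teacher.eval()
    student.train()
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            with torch.no_grad():
                teacher_logits = teacher(x)  # teacher provides distillation targets
            loss = distill_loss(student(x), teacher_logits, y)
            loss.backward()
            optimizer.step()
        # The teacher is only updated once the student has recovered on the
        # new data, i.e. after the freezing epoch.
        if epoch + 1 > EMA_FREEZE_EPOCHS:
            vanilla_ema_update(teacher, student)
```

Under this schedule, the only difference between the vanilla baseline and KC-EMA would lie in how the teacher update is weighted, which is consistent with the two curves in Fig. 11 diverging only after the freezing epoch.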


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
