This article presents a novel Decision Boundary-Aware Distillation methodology for Instance-Incremental Learning that requires no access to old data.


2025/11/05 23:30

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work, and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

4. Methodology

As shown in Fig. 2 (a), concept drift in new observations produces outer samples on which the existing model fails. The new IIL setting requires broadening the decision boundary to cover these outer samples while avoiding catastrophic forgetting (CF) of the old boundary. Conventional knowledge-distillation-based methods rely on preserved exemplars [22] or auxiliary data [33, 34] to resist CF. In the proposed IIL setting, however, no old data are accessible other than the new observations, and distillation based on these new observations conflicts with learning new knowledge when no new parameters are added to the model. To strike a balance between learning and not forgetting, we propose a decision boundary-aware distillation method that requires no old data. During learning, the new knowledge acquired by the student is intermittently consolidated back into the teacher model, which yields better generalization and is a pioneering attempt in this area.
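The intermittent consolidation step can be sketched as an exponential-moving-average (EMA) update of the teacher toward the student (the supplementary material refers to a KCEMA mechanism). The momentum value, the update interval, and the dict-of-weights representation below are illustrative assumptions, not the paper's exact implementation:

```python
def ema_consolidate(teacher, student, momentum=0.99):
    """Blend student weights back into the teacher via EMA.

    `teacher` and `student` map parameter names to weight values;
    the momentum value is an assumed hyperparameter.
    """
    for name, s_w in student.items():
        teacher[name] = momentum * teacher[name] + (1 - momentum) * s_w
    return teacher


def train_with_consolidation(teacher, student, steps, interval, update_student):
    """Run `steps` student-update steps; every `interval` steps,
    intermittently fold the student's new knowledge into the teacher."""
    for step in range(1, steps + 1):
        update_student(student)  # one distillation/learning step on new data
        if step % interval == 0:
            ema_consolidate(teacher, student)
    return teacher
```

Since the consolidated teacher is used for inference (Fig. 3 (b)), the EMA acts as a slow-moving anchor: the student can move freely toward the outer samples, while the teacher absorbs that shift gradually and so retains the old decision boundary.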

Figure 3. Comparison between (a) previous distillation-based methods, which perform inference with the student model (S), and (b) the proposed decision boundary-aware distillation (DBD) with knowledge consolidation (KC), which uses the teacher model (T) for inference.


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::


