This conclusion validates a novel IIL setting that aims for cost-effective model enhancement using only new data.

Future of IIL: Narrowing the Gap and Advancing Knowledge Accumulation

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

6. Conclusion and future work

This paper proposes a new setting for the instance incremental learning (IIL) task, in which no old data is available and the goal is to enhance the base model using only the new observations arriving at each phase. The new IIL setting is more practical for real-world deployment, both for fast, low-cost model updating and for compliance with data privacy policies. To tackle the proposed problem, a new decision boundary-aware distillation method with knowledge consolidation is presented. Benchmarks based on existing public datasets are established to evaluate performance, and extensive experiments demonstrate the effectiveness of the proposed method. However, a gap still exists between the IIL model and the full-data model. Future work on IIL can proceed in the following directions: 1) narrowing the gap between the IIL model and the full-data model; 2) few-shot IIL; and 3) a better way to accumulate knowledge than the proposed KC.
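The paper's exact formulations appear in Sections 4.1 and 4.2, which are not reproduced in this installment. Purely as an illustration of the two ingredients named above, the sketch below shows one plausible shape for them in PyTorch: a distillation term that up-weights samples lying near the old model's decision boundary, and an EMA-style parameter average for knowledge consolidation in the spirit of Mean Teacher [28], which the supplementary material's "KCEMA" analysis suggests. The margin-based weighting, the momentum value, and the function names here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def kcema_update(teacher: torch.nn.Module, student: torch.nn.Module,
                 momentum: float = 0.999) -> None:
    """EMA-style knowledge consolidation (assumed form): the teacher's
    weights slowly track the student's, as in Mean Teacher [28]."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def boundary_aware_distill_loss(student_logits: torch.Tensor,
                                teacher_logits: torch.Tensor,
                                temperature: float = 2.0) -> torch.Tensor:
    """Hypothetical boundary-aware distillation: samples whose top-two
    teacher probabilities are close (i.e., near the decision boundary)
    receive a larger distillation weight."""
    t_prob = F.softmax(teacher_logits / temperature, dim=1)
    top2 = t_prob.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]   # small margin => near the boundary
    weight = 1.0 - margin              # assumed weighting scheme
    s_logp = F.log_softmax(student_logits / temperature, dim=1)
    kl = F.kl_div(s_logp, t_prob, reduction="none").sum(dim=1)
    return (weight * kl).mean() * temperature ** 2
```

Under this reading, an IIL phase would train the student on the new instances with the distillation term added to the task loss, calling `kcema_update` periodically so the consolidated teacher accumulates the new knowledge without revisiting old data.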

References

[1] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139–154, 2018.

[2] Francisco M Castro, Manuel J Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 233–248, 2018.

[3] Matthias De Lange and Tinne Tuytelaars. Continual prototype evolution: Learning online from non-stationary data streams. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8250–8259, 2021.

[4] Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle. PODNet: Pooled outputs distillation for small-tasks incremental learning. In European Conference on Computer Vision, pages 86–102. Springer, 2020.

[5] Haibo He, Sheng Chen, Kang Li, and Xin Xu. Incremental learning from stream data. IEEE Transactions on Neural Networks, 22(12):1901–1914, 2011.

[6] Jiangpeng He, Runyu Mao, Zeman Shao, and Fengqing Zhu. Incremental learning in online scenario. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13926–13935, 2020.

[7] Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488, 2018.

[8] Rakib Hyder, Ken Shao, Boyu Hou, Panos Markopoulos, Ashley Prater-Bennette, and M Salman Asif. Incremental task learning with incremental rank updates. In European Conference on Computer Vision, pages 566–582. Springer, 2022.

[9] Minsoo Kang, Jaeyoo Park, and Bohyung Han. Class-incremental learning by knowledge distillation with adaptive feature consolidation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16071–16080, 2022.

[10] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.

[11] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

[12] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935–2947, 2017.

[13] Mingfu Liang, Jiahuan Zhou, Wei Wei, and Ying Wu. Balancing between forgetting and acquisition in incremental subpopulation learning. In European Conference on Computer Vision, pages 364–380. Springer, 2022.

[14] Yu Liu, Sarah Parisot, Gregory Slabaugh, Xu Jia, Ales Leonardis, and Tinne Tuytelaars. More classifiers, less forgetting: A generic multi-classifier paradigm for incremental learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI, pages 699–716. Springer, 2020.

[15] Yaoyao Liu, Bernt Schiele, and Qianru Sun. Adaptive aggregation networks for class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2544–2553, 2021.

[16] Vincenzo Lomonaco and Davide Maltoni. CORe50: A new dataset and benchmark for continuous object recognition. In Conference on Robot Learning, pages 17–26. PMLR, 2017.

[17] Yong Luo, Liancheng Yin, Wenchao Bai, and Keming Mao. An appraisal of incremental learning methods. Entropy, 22(11):1190, 2020.

[18] Sudhanshu Mittal, Silvio Galesso, and Thomas Brox. Essentials for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3513–3522, 2021.

[19] Inyoung Paik, Sangjun Oh, Taeyeong Kwak, and Injung Kim. Overcoming catastrophic forgetting by neuron-level plasticity control. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5339–5346, 2020.

[20] David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.

[21] Nan Pu, Wei Chen, Yu Liu, Erwin M Bakker, and Michael S Lew. Lifelong person re-identification via adaptive knowledge accumulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7901–7910, 2021.

[22] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001–2010, 2017.

[23] Amanda Rios and Laurent Itti. Closed-loop memory GAN for continual learning. arXiv preprint arXiv:1811.01146, 2018.

[24] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

[25] Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. BREEDS: Benchmarks for subpopulation shift. In International Conference on Learning Representations, 2021.

[26] Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, and Liang Wan. Exploring example influence in continual learning. In Advances in Neural Information Processing Systems, 2022.

[27] Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, and Yihong Gong. Bi-objective continual learning: Learning 'new' while consolidating 'known'. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5989–5996, 2020.

[28] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.

[29] Yujie Wei, Jiaxin Ye, Zhizhong Huang, Junping Zhang, and Hongming Shan. Online prototype learning for online continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18764–18774, 2023.

[30] Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, and Stefano Soatto. Class-incremental learning with strong pre-trained models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9601–9610, 2022.

[31] Shipeng Yan, Jiangwei Xie, and Xuming He. DER: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3014–3023, 2021.

[32] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987–3995. PMLR, 2017.

[33] Mengyao Zhai, Lei Chen, Frederick Tung, Jiawei He, Megha Nawhal, and Greg Mori. Lifelong GAN: Continual learning for conditional image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2759–2768, 2019.

[34] Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, and C-C Jay Kuo. Class-incremental learning via deep model consolidation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1131–1140, 2020.

[35] Yaqian Zhang, Bernhard Pfahringer, Eibe Frank, Albert Bifet, Nick Jin Sean Lim, and Alvin Jia. A simple but strong baseline for online continual learning: Repeated augmented rehearsal. In Advances in Neural Information Processing Systems, 2022.

[36] Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. Prototype augmentation and self-supervision for incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5871–5880, 2021.


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
