This paper studies active learning with parameter‑efficient fine‑tuning (adapters), showing AL+PEFT improves PLMs in low‑resource text classification.

Teaching Big Models With Less Data: How Adapters + Active Learning Win


:::info Authors:

(1) Josip Jukić, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia (josip.jukic@fer.hr);

(2) Jan Šnajder, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia (jan.snajder@fer.hr).

:::

Abstract and 1. Introduction

  2. Related Work
  3. Preliminaries
  4. Experiments
  5. Analysis
  6. Conclusion, Limitations, and References

A. Reproducibility

Abstract

Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full fine-tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.[1]


1 Introduction

Pre-trained language models (PLMs) have quickly become a staple in the field of natural language processing. With the growing demand for data for training these models, developing efficient fine-tuning methods has become critical. This is particularly relevant for many domains and languages where obtaining large amounts of labeled training data is difficult or downright impossible. In such low-resource settings, it becomes essential to effectively leverage and adapt PLMs while minimizing the need for extensive labeled data.

Data labeling is notoriously time-consuming and expensive, often hindering the development of sizable labeled datasets required for training high-performance models. Active learning (AL) (Cohn et al., 1996; Settles, 2009) has emerged as a potential solution to this challenge. In contrast to passive learning, in which the training set is sampled at random, AL encompasses a unique family of machine learning algorithms specifically designed to reduce labeling costs by reducing label complexity, i.e., the number of labels required by an acquisition model to achieve a certain level of performance (Dasgupta, 2011). With the advent of PLMs, AL research has pivoted towards investigating training regimes for PLMs, such as task-adaptive pre-training (TAPT; Gururangan et al., 2020), that could be combined with AL to further reduce the label complexity.
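To make the AL setup concrete, the sketch below shows the generic pool-based acquisition loop that such methods instantiate. This is our illustration, not the authors' implementation: the `init_model`, `train`, `uncertainty`, and `oracle` callables are placeholders for a concrete model constructor, training routine, query strategy, and human annotator.

```python
from typing import Any, Callable, List, Tuple

def active_learning_loop(
    init_model: Callable[[], Any],             # builds a fresh acquisition model
    train: Callable[[Any, List[Tuple[Any, Any]]], Any],
    uncertainty: Callable[[Any, Any], float],  # query strategy, e.g., max entropy
    oracle: Callable[[Any], Any],              # annotator providing gold labels
    labeled: List[Tuple[Any, Any]],
    unlabeled: List[Any],
    query_size: int = 50,
    num_steps: int = 10,
) -> Any:
    """Generic pool-based AL: train, score the pool, query the top instances."""
    model = None
    for _ in range(num_steps):
        # Retrain from scratch on the currently labeled set (a common AL protocol).
        model = train(init_model(), labeled)
        # Rank the unlabeled pool by informativeness and query the top instances.
        ranked = sorted(unlabeled, key=lambda x: uncertainty(model, x), reverse=True)
        for x in ranked[:query_size]:
            labeled.append((x, oracle(x)))
            unlabeled.remove(x)
    return model
```

Label complexity, in these terms, is how many oracle calls the loop needs before the acquisition model reaches a target performance level.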

While AL aims at directly minimizing the label complexity of learning, training efficiency can also be improved by reducing the parameter complexity of the model. This becomes more important as PLMs grow larger and fine-tuning becomes increasingly challenging due to the sheer number of parameters involved. To address this issue, adapters (Houlsby et al., 2019) have been introduced as compact modules that can be incorporated between the layers of PLMs. Adapters enable considerable parameter sharing, facilitating parameter-efficient fine-tuning (PEFT) through modular learning (Pfeiffer et al., 2023). In this process, only the parameters of the adapters are updated during tuning for a specific downstream task. Recent research (He et al., 2021; Li and Liang, 2021; Karimi Mahabadi et al., 2021) has revealed that some PEFT methods outperform full fine-tuning (FFT) in low-resource settings, potentially due to better stability and a decreased risk of overfitting. In contrast, FFT has been shown to exhibit instability in scenarios with limited data.
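As a rough illustration of the adapter idea, the following PyTorch sketch shows a Houlsby-style bottleneck adapter. The hidden and bottleneck dimensions are illustrative, and the exact placement and freezing logic differ across PEFT libraries.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.activation = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection leaves the PLM's representation unchanged
        # when the adapter output is near zero, which aids training stability.
        return hidden_states + self.up(self.activation(self.down(hidden_states)))
```

During PEFT, the backbone is frozen and only adapter (plus task head) parameters receive gradients, e.g., by setting `p.requires_grad` over `model.named_parameters()` based on the parameter name.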

Despite the promising results demonstrated by PEFT methods in low-resource settings, there is a striking gap in research on parameter-efficient training with respect to how PEFT interacts with AL. Given that the majority of real-world AL scenarios involve a restricted amount of data, PEFT methods emerge as strong candidates for AL acquisition models. However, there has been no exploration of AL in conjunction with adapters. Investigating this uncharted territory can further advance our understanding of AL and reveal novel strategies for optimizing performance in low-resource settings.

In this paper, we present an empirical study on the behavior of PEFT in low-resource settings for text classification tasks. We analyze PEFT with and without AL and compare it against FFT. Our results confirm that PEFT exhibits superior performance in low-resource setups compared to FFT, and we show that this improved performance extends to AL scenarios in terms of performance gains over passive learning. Furthermore, we analyze the efficacy of TAPT in conjunction with AL and PEFT. We find that TAPT is beneficial in AL scenarios for both PEFT and fully fine-tuned models, thus representing a viable technique for improving performance in low-resource settings. Finally, aiming to illuminate why PEFT and TAPT improve AL performance in low-resource settings, we analyze the properties of PEFT and FFT via forgetting dynamics (Toneva et al., 2019) and PLMs' instance-level representations. We find that AL methods choose fewer unforgettable and more moderately forgettable examples when combined with PEFT and TAPT, where forgettability indicates the model's tendency to learn and then forget the gold label of a particular instance. Compared to FFT, we observe that PEFT yields representations in the early and middle layers of a model that are more similar to the representations of the base PLM. We hypothesize that this property mitigates the issue of forgetting the knowledge obtained during pre-training when fine-tuning for downstream tasks.
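For intuition, forgetting events can be counted from per-epoch correctness records, following Toneva et al. (2019). The toy accuracy matrix below is ours, not the paper's data.

```python
import numpy as np

def forgetting_events(acc_history: np.ndarray) -> np.ndarray:
    """Count forgetting events per example (Toneva et al., 2019).

    acc_history: (num_epochs, num_examples) binary matrix; entry [t, i] is 1
    iff example i is classified correctly after epoch t.
    """
    # A forgetting event is a 1 -> 0 transition in an example's accuracy curve.
    forgotten = (acc_history[:-1] == 1) & (acc_history[1:] == 0)
    return forgotten.sum(axis=0)

# Toy record over 3 epochs and 3 examples: example 0 is learned, forgotten,
# then relearned; examples 1 and 2 are never forgotten once learned.
acc = np.array([[1, 0, 1],
                [0, 0, 1],
                [1, 1, 1]])
print(forgetting_events(acc))  # -> [1 0 0]
```

Under this bookkeeping, unforgettable examples are those learned and never forgotten (zero events), while moderately forgettable examples accumulate a few events over training.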

In summary, we show that in AL low-resource settings for text classification, (1) PEFT yields greater performance improvements compared to FFT and (2) TAPT enhances the overall classification performance of adapters and is well-suited for AL scenarios. We also show that (3) AL methods choose fewer unforgettable and more moderately forgettable examples with PEFT and that (4) PEFT produces instance-level representations of early and middle layers that are more similar to the base PLM than FFT. Our results uncover the intricacies of positive interactions between AL, PEFT, and TAPT, providing empirical justification for their combined use in low-resource settings.
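One simple way to probe finding (4) is to compare the tuned model's hidden states against those of the frozen base PLM, layer by layer. The sketch below uses mean token-wise cosine similarity as a stand-in for whatever similarity measure the paper employs; the model name is illustrative, and `tuned_model` is assumed to be a `transformers` encoder that accepts `output_hidden_states`.

```python
import torch
from transformers import AutoModel, AutoTokenizer

@torch.no_grad()
def layerwise_similarity(base_name: str, tuned_model, texts):
    """Mean cosine similarity between base-PLM and tuned hidden states, per layer."""
    tokenizer = AutoTokenizer.from_pretrained(base_name)
    base_model = AutoModel.from_pretrained(base_name)
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    base_states = base_model(**batch, output_hidden_states=True).hidden_states
    tuned_states = tuned_model(**batch, output_hidden_states=True).hidden_states
    # One tensor per layer of shape (batch, seq_len, hidden); index 0 is the
    # embedding layer. Padding positions are included in the mean for simplicity.
    return [
        torch.cosine_similarity(hb, ht, dim=-1).mean().item()
        for hb, ht in zip(base_states, tuned_states)
    ]
```

A PEFT-tuned model would be expected to score higher similarities than an FFT model in the early and middle entries of the returned list.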


2 Related Work

Our research involves combining AL with PLMs and investigating the use of PEFT techniques within the confines of low-resource settings.

AL with PLMs. Until recently, the conventional approach for integrating PLMs with AL involved performing full fine-tuning with a fixed number of training epochs and training the model from scratch in each AL step (Ein-Dor et al., 2020; Margatina et al., 2021; Shelmanov et al., 2021; Karamcheti et al., 2021; Schröder et al., 2022). However, studies by Mosbach et al. (2021) and Zhang et al. (2021) revealed that fine-tuning in low-resource setups is prone to instability, particularly when training for only a few epochs. This instability, often sensitive to weight initialization and data ordering (Dodge et al., 2020), presents a significant challenge for AL, which frequently operates in low-resource settings. Recent research has looked into the impact of PLM training regimes on AL performance (Grießhaber et al., 2020; Yuan et al., 2020; Yu et al., 2022), suggesting that the choice of training regime is more critical than the choice of the AL method. Notably, TAPT has proven particularly effective in enhancing AL performance (Margatina et al., 2022; Jukić and Šnajder, 2023).
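For context, TAPT simply continues masked language modeling on the task's unlabeled texts before supervised fine-tuning. Below is a minimal Hugging Face sketch under assumed defaults; the model name, toy corpus, and hyperparameters are illustrative, not the authors' configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Unlabeled in-domain texts stand in for the task's unlabeled pool.
texts = ["an unlabeled in-domain sentence.", "another sentence from the task."]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens and train the model to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
).train()
# The adapted encoder is then fine-tuned (fully or with adapters) on labeled data.
```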

Adapters in low-resource settings. Research on adapters in low-resource settings has primarily focused on areas such as cross-lingual transfer for low-resource languages (Ansell et al., 2021; Lee et al., 2022; Parović et al., 2022), where the emphasis lies on exploring diverse methods of fusing adapters. In monolingual settings with scarce data, adapters have been found to outperform full fine-tuning (Li and Liang, 2021; Mao et al., 2022). A study by He et al. (2021) demonstrated that adapter-based tuning exhibits enhanced stability and generalization capabilities by virtue of being less sensitive to learning rates than traditional fine-tuning methods. While incorporating task adaptation techniques, such as TAPT, has been shown to match or even improve performance over FFT in low-resource setups, Kim et al. (2021) noted an interesting caveat: the benefits of integrating TAPT with adapters tend to taper off as the amount of data increases.

Despite the established effectiveness of adapters in setups with limited resources, their integration into AL frameworks — which frequently face analogous resource constraints — remains an untapped area of research. This gap is particularly notable given that AL's iterative learning process could significantly benefit from adapters' parameter efficiency and transferability, especially in scenarios where data scarcity or labeling costs are primary concerns.


:::info This paper is available on arXiv under a CC BY 4.0 DEED license.

:::

[1] Our code is available at https://github.com/josipjukic/adapter-al
