Adaption, an AI startup founded by former Cohere Vice President of Research Sara Hooker, has introduced AutoScientist, a system designed to automate the tailoring of AI models to specific tasks by jointly optimising training data and learning configurations. The system is positioned as a step toward automating AI research and development workflows, with the aim of reducing the manual effort typically required in model fine-tuning and experimentation.
AutoScientist is described as an end-to-end framework that co-optimises datasets and training recipes, iterating through a closed loop in which data selection and training parameters are adjusted together. The process is intended to continue until performance stabilises around a defined objective, effectively allowing the system to refine both what the model learns from and how it learns, without constant human intervention.
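Adaption has not published AutoScientist's internals, so the loop described above can only be illustrated in the abstract. The toy sketch below shows the general shape of such a closed loop: a data-selection knob and a training hyperparameter are perturbed together, a stand-in scoring function plays the role of a train-and-evaluate step, and the search stops once the score stops improving. All names here (`score_model`, `closed_loop`, the specific parameters) are hypothetical and not drawn from the product.

```python
import random

def score_model(data_fraction, learning_rate):
    # Stand-in for a real train-and-evaluate step; peaks at
    # data_fraction=0.7, learning_rate=0.01 for illustration only.
    return 1.0 - (data_fraction - 0.7) ** 2 - (learning_rate - 0.01) ** 2

def closed_loop(steps=300, patience=50, seed=0):
    rng = random.Random(seed)
    data_fraction, learning_rate = 0.5, 0.05   # initial "recipe"
    best = score_model(data_fraction, learning_rate)
    stale = 0
    for _ in range(steps):
        # Jointly perturb the data recipe and the training recipe.
        cand_df = min(1.0, max(0.0, data_fraction + rng.uniform(-0.05, 0.05)))
        cand_lr = max(1e-5, learning_rate + rng.uniform(-0.005, 0.005))
        cand = score_model(cand_df, cand_lr)
        if cand > best:
            best, data_fraction, learning_rate = cand, cand_df, cand_lr
            stale = 0
        else:
            stale += 1
            if stale >= patience:   # performance has stabilised
                break
    return data_fraction, learning_rate, best

print(closed_loop())
```

A production system would replace the random perturbations with a learned or history-aware search and the scoring stub with actual fine-tuning runs; the point of the sketch is only the joint, iterative adjustment of data and training configuration under a stopping criterion.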
According to the company, the tool is intended to reduce the time required to move from an initial concept to a deployed, customised model, potentially compressing development cycles from weeks to hours. It is also presented as broadening access to model customisation beyond machine learning specialists, enabling users without deep technical expertise to influence not only prompts but also the underlying behaviour of trained systems. The approach is framed as particularly relevant for organisations seeking to fine-tune models for domain-specific language, structured outputs, or efficiency constraints such as latency and cost, while making more effective use of proprietary datasets.
Internal evaluations referenced by the company suggest that AutoScientist demonstrates improved performance compared with baseline models across a range of dataset sizes between 5,000 and 100,000 examples, as well as across multiple model architectures available for fine-tuning. Reported results indicate consistent gains regardless of domain, with performance measured using in-house evaluations tailored to specific vertical applications.
Further comparisons presented in the evaluation framework indicate that AutoScientist achieved higher average performance than configurations designed by human researchers, including experienced AI engineering staff. In these tests, human experts selected training setups based on their knowledge of model architecture, dataset characteristics, and domain requirements, while AutoScientist was given the same inputs along with the ability to iteratively refine its own configurations using historical run data. Under these conditions, aggregate outcomes reportedly improved from 48 percent to 64 percent when using the automated system, with an average performance uplift of approximately 35 percent across experiments.
Additional benchmarking across multiple application areas suggests that the system is not strongly sensitive to specific domains, with gains observed across eight different verticals. The company reports that this consistency is notable given that many traditional fine-tuning approaches tend to underperform outside narrow or highly curated settings, whereas AutoScientist reportedly delivers more stable improvements across varied tasks and datasets.
The system is positioned as part of a broader effort to automate model development processes, particularly in areas involving long-horizon reasoning, which remains a persistent challenge in AI reliability. The developers indicate that AutoScientist represents an early step toward reducing the need for manual intervention in model training pipelines, with future research directions focused on enabling more immediate forms of adaptation that may not require traditional training cycles.
Alongside its technical objectives, the release is also framed as an effort to broaden access to model customisation, allowing a wider range of users to shape AI systems for specific applications. The tool is being made available free of charge for an initial 30-day period. The broader aim, according to the company, is to reduce barriers to AI model development and expand the ability to create tailored systems beyond a small group of specialised researchers concentrated in major laboratories.
A key contextual argument highlighted in the announcement is that only a small number of people globally possess the expertise required to properly train and fine-tune frontier AI models, with most of this knowledge concentrated within a limited number of major research laboratories. It is suggested that if a system such as AutoScientist is able to successfully automate aspects of this expertise, the process of building customised models for individual organisations and specific use cases could become more accessible and practically achievable.
The post Adaption’s AutoScientist Automates Model Fine-Tuning With Closed-Loop Training Outperforming Human-Designed Configurations appeared first on Metaverse Post.