AI Lab Goodfire Raises $150M at $1.25B Valuation to Design Models with Interpretability

SAN FRANCISCO, Feb. 5, 2026 /PRNewswire/ — Today, Goodfire—the AI research lab using interpretability to understand, learn from, and design models—announced a $150 million Series B funding round at a $1.25 billion valuation. The round was led by B Capital, with participation from existing investors Juniper Ventures, Menlo Ventures, Lightspeed Venture Partners, South Park Commons, and Wing Venture Capital, and new investors DFJ Growth, Salesforce Ventures, Eric Schmidt, and others. This funding, coming less than a year after its Series A, will enable Goodfire to advance frontier research initiatives, build the next generation of its core product, and scale partnerships across AI agents and life sciences.

Interpretability is the science of how neural networks work internally, and how modifying their inner mechanisms can shape their behavior—e.g., adjusting a reasoning model’s internal concepts to change how it thinks and responds. Interpretability also enables AI-to-human knowledge transfer, i.e., extracting novel insights from powerful AI models. Goodfire recently identified a novel class of Alzheimer’s biomarkers in this way, by applying interpretability techniques to an epigenetic model built by Prima Mente—the first major finding in the natural sciences obtained from reverse-engineering a foundation model.
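
As a concrete illustration, one basic interpretability primitive is reading how strongly a concept is represented inside a network by projecting its hidden activations onto a concept direction. The sketch below uses a toy PyTorch model; the layer choice and concept vector are hypothetical stand-ins, and in practice such directions are typically learned with techniques like sparse autoencoders or linear probes rather than drawn at random.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_dim = 64
model = nn.Sequential(
    nn.Linear(hidden_dim, hidden_dim),  # stand-in for one transformer block
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),
)

captured = {}

def capture_hook(module, inputs, output):
    # Save this layer's activations for inspection.
    captured["acts"] = output.detach()

model[0].register_forward_hook(capture_hook)

# Hypothetical unit-norm "concept" direction in activation space.
concept = torch.randn(hidden_dim)
concept = concept / concept.norm()

x = torch.randn(8, hidden_dim)  # a batch of token representations
model(x)

# One scalar per token: how strongly the concept fires on that token.
scores = captured["acts"] @ concept
print(scores)
```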

“We are building the most consequential technology of our time without a true understanding of how to design models that do what we want,” said Yan-David “Yanda” Erlich, former COO and CRO at Weights & Biases and General Partner at B Capital. “At Weights & Biases, I watched thousands of ML teams struggle with the same fundamental problem: they could track their experiments and monitor their models, but they couldn’t truly understand why their models behaved the way they did. Bridging that gap is the next frontier. Goodfire is unlocking the ability to truly steer what models learn, make them safer and more useful, and extract the vast knowledge they contain.”

Most companies building AI models today treat them as black boxes. Goodfire believes that this approach leaves society flying blind, and that deeply understanding how models work “under the hood” is critical to building and deploying safe, powerful AI systems. The company is pursuing research that turns AI into something that can be understood, debugged, and intentionally designed like written software.

“Interpretability, for us, is the toolset for a new domain of science: a way to form hypotheses, run experiments, and ultimately design intelligence rather than stumbling into it,” explained Goodfire CEO Eric Ho. “Every engineering discipline has been gated by fundamental science—like steam engines before thermodynamics—and AI is at that inflection point now.”

Goodfire is part of an emerging cadre of research-first “neolabs”: AI companies pursuing breakthroughs in model training that have been neglected by “scaling labs” such as OpenAI and Google DeepMind.

So far, the company has shown the value of its interpretability-driven approach across two key domains: scientific discovery and model design.

On the scientific discovery front, Goodfire has focused on deciphering scientific foundation models with partners like Mayo Clinic, Arc Institute, and Prima Mente, exemplified by its identification of a new class of biomarkers for Alzheimer’s detection. Because AI models already surpass human understanding in many scientific domains, such as materials discovery and protein folding, studying how those models work can yield novel insights and expand the horizons of human knowledge. The company plans to continue scaling its pipeline for scientific discovery with new collaborators.

On the model design front, Goodfire has focused on teaching models directly through their internal mechanisms. The company has recently developed methods to efficiently retrain a model by precisely targeting the parts of its inner workings responsible for a given behavior; one application of these methods cut hallucinations in a large language model by half. Goodfire is betting that this approach will underpin a paradigm shift in how AI is built, in which AI can be made far more reliable and people can precisely and efficiently dictate how models should behave, without off-target effects.
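
Goodfire has not published the details of these methods, but the broad family of techniques they belong to can be sketched. The toy example below shows activation steering: suppressing an internal direction associated with an unwanted behavior at inference time. The model, layer, direction, and coefficient are all hypothetical stand-ins, not Goodfire’s implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_dim = 64
block = nn.Linear(hidden_dim, hidden_dim)  # stand-in for one transformer layer

# Hypothetical unit-norm direction associated with an unwanted behavior,
# e.g. found by contrasting activations on desirable vs. undesirable outputs.
direction = torch.randn(hidden_dim)
direction = direction / direction.norm()

def steer_hook(module, inputs, output, alpha=-4.0):
    # Returning a value from a forward hook replaces the layer's output;
    # here we subtract the behavior direction, suppressing it downstream.
    return output + alpha * direction

handle = block.register_forward_hook(steer_hook)
x = torch.randn(2, hidden_dim)
steered = block(x)
handle.remove()
unsteered = block(x)
print((steered - unsteered).norm())  # nonzero: the intervention changed the activations
```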

The new funding will support Goodfire’s work to rethink training and build a “model design environment”—a platform for understanding, debugging, and intentionally designing AI models at scale. The platform will leverage frontier interpretability techniques to allow users to reach inside models, identify the parts responsible for behaviors they want to change, and specifically train or intervene on those subunits.
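
What such a workflow might look like in code can only be guessed at from this description, but a minimal sketch, assuming a "locate, then retrain only that subunit" loop, is below. Freezing everything except an identified layer and penalizing the expression of an unwanted direction, while regularizing toward the original outputs to limit off-target effects, illustrates the shape of the idea; all names and objectives here are assumptions, not the platform’s actual API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_dim = 64
model = nn.Sequential(
    nn.Linear(hidden_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),
)

# Freeze the whole model...
for p in model.parameters():
    p.requires_grad = False
# ...except the subunit identified as responsible for the behavior.
target = model[2]
for p in target.parameters():
    p.requires_grad = True

# Hypothetical direction whose expression we want to suppress.
direction = torch.randn(hidden_dim)
direction = direction / direction.norm()

x = torch.randn(32, hidden_dim)
with torch.no_grad():
    baseline = model(x)  # original outputs, used as an off-target guard

opt = torch.optim.Adam(target.parameters(), lr=1e-3)
for step in range(200):
    out = model(x)
    concept_score = (out @ direction).pow(2).mean()  # unwanted behavior
    drift = (out - baseline).pow(2).mean()           # off-target change
    loss = concept_score + 0.1 * drift
    opt.zero_grad()
    loss.backward()
    opt.step()
```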

The company also plans to continue its greenfield research into fundamental model understanding and new interpretability methods.

Goodfire’s team comprises top AI researchers from DeepMind and OpenAI, leading academics from universities including Harvard and Stanford, and ML engineering talent from OpenAI and Google. The team includes Nick Cammarata, a core contributor to the seminal interpretability team at OpenAI; co-founder Tom McGrath, who founded the interpretability team at Google DeepMind; and Leon Bergen, a professor at UC San Diego (on leave).

About Goodfire

Goodfire is a research company and public benefit corporation based in San Francisco, dedicated to using interpretability to understand, learn from, and design AI systems. Our mission is to build the next generation of safe and powerful AI—not by scaling alone, but by understanding the intelligence we’re building. Our goal is to make AI that can be understood, debugged, and shaped like software. Our team shaped modern neural network interpretability at OpenAI, DeepMind, Stanford, and Harvard. We’re backed by over $200M from B Capital, Menlo Ventures, Lightspeed, Eric Schmidt, and others.

Learn more at goodfire.ai and x.com/GoodfireAI.

About B Capital

B Capital invests globally in extraordinary founders and businesses shaping the future through technology. With more than $9 billion in assets under management and dedicated stage-based funds, the firm focuses on seed to early- and late-stage venture growth investments, primarily in the technology, healthcare, and energy tech sectors. Founded in 2015, B Capital has an integrated, global team across nine locations in the U.S. and Asia. The firm’s value-add platform, together with the consulting expertise of its strategic partner, The Boston Consulting Group, provides entrepreneurs with the tools and resources to scale quickly and efficiently, expand into new markets, and build market-leading businesses.

View original content to download multimedia: https://www.prnewswire.com/news-releases/ai-lab-goodfire-raises-150m-at-1-25b-valuation-to-design-models-with-interpretability-302680120.html

SOURCE Goodfire
