xAI’s Grok 2.5: Open-Sourced, But Does It Pass the EU AI Act Test?

2025/08/25 23:38
5 min read

In the fast-evolving world of AI, open-sourcing models has become a battleground for innovation, ethics, and regulation. Just recently, on August 25, 2025 (yes, that’s today!), Elon Musk announced that xAI has open-sourced Grok 2.5, its flagship model from last year, making the weights available on Hugging Face. This move echoes OpenAI’s earlier release on August 5, 2025, of two open-source models: gpt-oss-120b (120 billion parameters) and gpt-oss-20b (20 billion parameters), under the permissive Apache 2.0 license with an added usage policy.

Both companies are pushing the boundaries of general-purpose AI (GPAI) models — those versatile systems capable of tackling reasoning, coding, math, and more. But with great power comes great scrutiny, especially under the EU AI Act (Regulation (EU) 2024/1689), which sets strict rules for transparency, risk management, and open-source claims.

Inspired by a recent Medium article analyzing OpenAI’s models (check it out here), I’ll conduct a similar compliance check for Grok 2.5. Using publicly available info like model cards and announcements, we’ll evaluate its alignment with the Act’s requirements for open-source GPAI models. Spoiler: It’s not as straightforward as it seems. Note that this is a high-level analysis — true compliance needs official regulatory review.

[Image: Grok]

Breaking Down the Models: Grok 2.5 vs. OpenAI’s Duo

Let’s start with the basics to set the stage.

  • Grok 2.5 (xAI): This beast clocks in at around 270 billion parameters, trained back in 2024 for text tasks like reasoning. What’s released? The model weights (a hefty ~500 GB across 42 files) and the tokenizer; a download sketch follows this list. But details on the full architecture, training code, or datasets? Slim to none, beyond hints like a June 2024 knowledge cutoff. The license is a custom “Grok 2 Community License Agreement”: revocable, allowing commercial and non-commercial use, but with restrictions like a ban on using the model to train or improve other AI models. No separate usage policy, just the license terms.
  • gpt-oss-120b and gpt-oss-20b (OpenAI): Smaller siblings at 120B and 20B parameters, trained on trillions of filtered tokens with chain-of-thought training techniques. Releases include weights, architecture details, a tokenizer (via tiktoken), and even some training code snippets. Licensed under Apache 2.0, which is highly permissive, with a usage policy encouraging responsible AI without heavy restrictions.
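
If you want to poke at the Grok release yourself, here’s a minimal download sketch using the huggingface_hub client. The repo id xai-org/grok-2 matches xAI’s announcement but treat it as an assumption and verify it before running; also budget roughly 500 GB of disk.

```python
# Minimal sketch: fetch the released Grok 2.5 weights from Hugging Face.
# Assumes `pip install huggingface_hub` and ~500 GB of free disk space.
# The repo id is taken from xAI's announcement; verify it before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xai-org/grok-2",  # assumed repo id for the release
    local_dir="./grok-2.5",    # destination for the ~42 weight files
)
print(f"Weights downloaded to: {local_dir}")
```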

Training compute (measured in FLOPs) isn’t explicitly shared for Grok 2.5, but given its size, it’s probably under the 10²⁵ FLOPs mark that triggers “systemic risk” status; for scale, GPT-3’s training compute is estimated at around 3.14 × 10²³ FLOPs. OpenAI’s models are in the same boat, as noted in the original article.
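
Since xAI hasn’t published a compute figure, the best anyone outside can do is a back-of-the-envelope estimate. A common heuristic for dense transformers is training FLOPs ≈ 6 × N × D, where N is the parameter count and D is the number of training tokens. Here’s a minimal sketch; the Grok 2.5 token counts are pure assumptions (xAI hasn’t disclosed them), and the rule is only a rough fit if the model uses a mixture-of-experts architecture.

```python
# Rough training-compute check against the EU AI Act's 10^25 FLOPs
# systemic-risk trigger, using the common dense-transformer heuristic
# FLOPs ~= 6 * N * D (N = parameters, D = training tokens).
# The Grok 2.5 token counts below are illustrative assumptions only.

THRESHOLD = 1e25  # FLOPs trigger for "systemic risk" under the Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate pretraining compute with the 6ND rule of thumb."""
    return 6 * n_params * n_tokens

scenarios = [
    ("GPT-3 (175B params, 300B tokens, published)", 175e9, 300e9),
    ("Grok 2.5, low-D assumption (270B, 3T tokens)", 270e9, 3e12),
    ("Grok 2.5, high-D assumption (270B, 10T tokens)", 270e9, 10e12),
]

for label, n, d in scenarios:
    flops = training_flops(n, d)
    side = "OVER" if flops > THRESHOLD else "under"
    print(f"{label}: ~{flops:.2e} FLOPs -> {side} 1e25")
```

The GPT-3 row lands at ~3.2 × 10²³, matching the estimate cited above. Notice that plausible token counts put Grok 2.5 on either side of the 10²⁵ line, which is exactly why the uncertainty matters for the Article 55 analysis below.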

The EU AI Act: What Open-Source GPAI Models Need to Nail

The EU AI Act classifies GPAI as AI that handles diverse tasks without a narrow focus (Article 3, point 63). For “open-source” ones (Recital 102, Article 3 point 12), the bar is high: They must use a “free and open license” allowing unrestricted access, use, study, modification, and sharing — including derivatives — with no commercial bans or field limits.

Key obligations include:

  • Article 53 (Baseline for All GPAI Providers): Technical docs on training/testing, copyright respect (like honoring opt-outs from Directive 2019/790), and usage info.
  • Article 55 (For Systemic Risks): If training compute exceeds 10²⁵ FLOPs or the model is otherwise designated as posing systemic risk, providers must add model evaluations, adversarial testing, incident reporting, and cybersecurity measures. Open-source models can snag exemptions from some transparency rules if their license is truly open and they’re not monetized (e.g., no paid subscriptions).
  • Exemptions and the Code of Practice: Genuinely open-source models (without systemic risk) skip some hurdles if the license is barrier-free; a toy encoding of that decision logic follows this list. The EU’s Code of Practice (rolling out in 2025) offers a voluntary roadmap for safety, transparency, and copyright.
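
To make the interaction between these conditions concrete, here’s a toy encoding of the exemption logic as this article summarizes it. This is an illustration, not the Act’s literal text and certainly not legal advice; the boolean inputs are my reading of the public facts.

```python
# Toy encoding of the open-source GPAI transparency exemption, as
# summarized above: the license must be genuinely open, the model must
# not be monetized, and it must not pose systemic risk. Illustrative only.
from dataclasses import dataclass

@dataclass
class GPAIModel:
    name: str
    truly_open_license: bool  # free use, study, modification, sharing
    monetized: bool           # e.g., paid access tied to the model
    systemic_risk: bool       # >1e25 FLOPs or designated by the Commission

def transparency_exemption(m: GPAIModel) -> bool:
    """All three conditions must hold for the exemption to apply."""
    return m.truly_open_license and not m.monetized and not m.systemic_risk

# Inputs reflect this article's reading of the public facts.
grok = GPAIModel("Grok 2.5", truly_open_license=False,  # revocable, restricted
                 monetized=False, systemic_risk=False)
gpt_oss = GPAIModel("gpt-oss-120b", truly_open_license=True,
                    monetized=False, systemic_risk=False)

for m in (grok, gpt_oss):
    print(f"{m.name}: exemption applies -> {transparency_exemption(m)}")
```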

The Act loves open-source for sparking innovation but calls out licenses with sneaky restrictions.

The Compliance Breakdown: Where Grok 2.5 Stands vs. OpenAI

Mirroring the original article’s style, here’s a table assessing compliance based on public data. Ratings: “Likely Compliant” (checks out), “Partial/Questionable” (iffy spots), or “Potential Non-Compliance” (red flags). I’ve included OpenAI for direct comparison.

[Embedded compliance table: https://medium.com/media/8218eea850b241bff8bd2a4f09d44233/href]

Wrapping It Up: Lessons for xAI and the AI World

Grok 2.5 ticks some boxes as a GPAI release but stumbles on open-source purity thanks to its custom license’s revocability and restrictions — potentially stripping away exemptions and inviting deeper EU scrutiny under Articles 53 and 55. OpenAI’s gpt-oss models, with their straightforward Apache 2.0 setup and better docs, seem to sail through more smoothly, qualifying for those sweet exemptions while hitting baselines.

If Grok 2.5’s FLOPs secretly top 10²⁵ (doubtful, but possible), the gaps widen. xAI could level up by switching to a standard open license and beefing up transparency. For anyone in AI, this highlights the Act’s push: Open-source is great, but only if it’s truly open.

Curious about the EU’s full guidelines? Dive into them or chat with regulators for the real deal. What do you think — will more companies follow suit, or tighten up? Drop your thoughts below!


xAI’s Grok 2.5: Open-Sourced, But Does It Pass the EU AI Act Test? was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.
