The main constraint on AI-assisted development was not model capability but how context was structured and exposed.

Lessons From Hands-on Research on High-Velocity AI Development

2025/12/12 13:57
11 min read

The Gap Between AI Hype and Production Reality

In the last two years, SWE-bench Verified performance jumped from 4.4% to over 70%; frontier models routinely solve repository-scale tasks that were unthinkable in 2023. In September 2025, Dario Amodei stated that "70, 80, 90% of the code written at Anthropic is written by Claude."

Industry surveys (the 2025 DORA Report) find that ~90% of software professionals now integrate AI into their workflows, with a majority describing their reliance as "heavy." Meanwhile, marketing copy has jumped straight to "weekend apps" and "coding is over." But when you look closely at production teams, the story is messier.

Rigorous studies find that many teams experience slowdowns, not speedups, once they push LLMs into large codebases: context loss, tooling friction, evaluation overhead, and review bottlenecks erode the promised gains; one controlled study measured a 19% increase in task completion time. Repository-scale benchmarks tell a similar story: prompt techniques that boost performance by ~19% on simple tasks translate into only ~1–2% improvements at real-world repo scale.

In parallel, research on long context is clear about the "context window paradox": as context length grows, performance often drops, sometimes by 13–85%, even when the model has perfect access to all relevant information. In another large study, models fell from ~95% to ~60–70% accuracy when exposed to long inputs mixing relevant tokens with realistic distractors. Larger apertures without structure amplify noise, not signal.

So:

  • Models are strong enough.
  • Usage has gone mainstream.
  • But system-level productivity is inconsistent at best.

That raises a few concrete questions:

  • How do I keep the model grounded in the right project context as the repo grows?
  • How do I stop long sessions from drifting into hallucinations and dead branches?
  • How do I get velocity without creating an unmaintainable pile of AI-generated sludge?

After spending weeks researching what's true and what's false, the conclusion was that the big difference comes not from more model capability (at least not for now) or prompting tricks, but from systematic context engineering and system design (Context Engineering, HumanLayer, 2025).

Our Approach to Systematic Context Engineering

At this point we had an intuition of what to do; the question was how. We synthesized all those long nights of reading into four optimization areas for building a production platform from the ground up:


  1. Low-overhead, AI-optimized development environment selection.
  2. Prompting as an “Operational Discipline” and session management.
  3. Systematic context engineering via a two-layer architecture (a static rule layer and a repo MCP for dynamic context).
  4. Agentic orchestration over that shared context substrate (areas 1–3).

Rather than arguing from theory alone, over 15 weeks on a part-time basis (nights and weekends), we researched and developed this architecture into an internal production-ready artifact: ~220k lines of clean code and 78 features. That became our lab and playground. Below is everything we learned, organized into four areas.

Area 1: A Zero-Friction, AI-Optimized Stack

The first step is deliberately unglamorous: stop burning cycles on infrastructure no model can help you with. Following that logic, the stack was curated with three criteria:

  1. LLM-friendly documentation & APIs.
  2. Minimal configuration and operational overhead.
  3. Fast feedback cycles (builds, previews, tests).

The tech stack, of course, shifts based on what you are building, but concretely, for this experiment, it meant:

  • Next.js + React + Expo for unified web + native development.
  • Supabase + Vercel for managed Postgres, auth, storage, and CI/CD with git-driven deploys.
  • Full-stack TypeScript + Zod for end-to-end types that LLMs can reliably refactor.
  • Tailwind CSS + shadcn/ui for composable UI mapped into a component library agents can reuse.
  • LangChain, LangGraph, and n8n for orchestration and automation.
  • Playwright (with an MCP bridge) for E2E tests that agents can see and operate.
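To make the "end-to-end types" point concrete, here is a minimal, dependency-free sketch of the pattern (the project uses Zod; the hand-rolled validator and field names below are illustrative only): one schema object drives both the static type and the runtime check, so a refactor touches a single definition.

```typescript
// Illustrative only: in practice Zod provides this. One schema object is
// the single source of truth for both the static type and the runtime
// validator, so an LLM refactor only touches one definition.
const userFields = { id: "string", email: "string", age: "number" } as const;

type FieldTypes = { string: string; number: number };
type User = { [K in keyof typeof userFields]: FieldTypes[(typeof userFields)[K]] };

function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  return Object.entries(userFields).every(
    ([key, kind]) => typeof (value as Record<string, unknown>)[key] === kind,
  );
}

console.log(isUser({ id: "u1", email: "a@b.c", age: 30 })); // true
console.log(isUser({ id: "u1" })); // false
```

With Zod the equivalent is `z.object(...)` plus `z.infer<typeof Schema>`; the structure, not the library, is what makes the codebase safe for LLM-driven refactors.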

Many modern AI-native and vibe-coding platforms converge on similar stacks (Bolt.new and Replit use Expo; Lovable and v0 use Tailwind CSS and shadcn/ui; Lovable uses Supabase). The point is not the specific logos; it's the principle: pick tools that shift labor up the stack, where models can actually help, and remove as much bespoke infra as possible.


Area 2: Prompting as an Operational Discipline

Step 2 translates prompt engineering research into repeatable operational practices in our AI-native IDE (Cursor).

The key move is to treat prompts as structured artifacts, not ad-hoc chat messages. In practice, that translated into:

  • Foundation of semantic context:
      • Activate full-repository indexing: the cornerstone of the workflow is enabling Cursor's codebase indexing.
      • Enhance retrieval with metadata: to guide the retrieval process, annotate key files with simple, machine-readable tags in the code. A comment like // @module:authentication provides a strong signal to the RAG system.
      • Integrate external knowledge sources: LLM knowledge can be outdated; pull in current documentation with Cursor's @docs command.
  • Task-scoped sessions & atomic git: every feature or bug gets its own AI session; no mixing unrelated work in the same context. This limits drift and "context rot."
  • Explicit multi-factor prompts. Each prompt encodes:
      • user stories and business goals,
      • precise UI/UX descriptions,
      • technical constraints anchored in the Area 1 stack,
      • relevant data models and schemas,
      • and a testing/validation plan.
  • Tools orchestration:
      • Target UI development with precision: when working on the frontend, we use the Chrome / Cursor Browser MCP's "inspect" feature.
      • UI/UX design partner: the design process followed a component-first (see Area 3), system-driven approach, partially inspired by 21st.dev (a great resource for LLM-ready UI) and other open-source libraries.
      • Activate programmatic context via MCPs: although the model can activate MCPs automatically, explicit invocation improves consistency and result quality.
      • Apply weighted governance to agent actions: AI autonomy is governed through Cursor's task manager to balance speed and safety. High-risk actions (e.g., database migrations, dependency installs) should be set as an "Ask Every Time" human checkpoint, while low-risk tasks (e.g., code refactoring) use an allowlist for faster automation.
      • Define and access Cursor Commands (.cursor/commands): structured, predeclared task shortcuts that encapsulate complex developer intentions as reusable directives.
      • Use the Cursor command queue for session autonomy: long development sessions often exceed a single context window. Cursor's command queue lets the agent stack and execute sequenced actions (e.g., "refactor → run tests → create docs → commit") without continuous manual prompting.
  • Verification loops and auto-debugging: this turned out to be a huge timesaver. The agent is instructed to place targeted diagnostics and verification loops, leveraging terminal logs and the Browser MCP or Playwright MCP, and then reason over them.

Cursor’s rules, agent configuration, and MCP integration serve as the execution surface for these workflows.


Area 3: Two-Layer Context Engineering

We came up with a static layer and a dynamic one.

Layer 1: Declarative Rulebook (.cursor/rules)

The first layer is a static rulebook that encodes non-negotiable project invariants in a machine-readable way. These rules cover:

  • Repository structure and module boundaries,
  • Security expectations and auth patterns,
  • Supabase (the used BaaS) and API usage constraints,
  • UI conventions and design tokens,
  • Hygiene rules (atomic commits, linting, code cleanup, and constraints on destructive commands, so LLMs don't nuke your DB).

In Cursor, rules can be:

  • Always-on for global invariants,
  • Auto-loaded for specific paths or file types,
  • or manually referenced in prompts.
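As a hedged sketch (the rule name, globs, and contents below are illustrative, not our actual rulebook), a scoped rule file in .cursor/rules looks roughly like this:

```
---
description: Supabase data-access invariants
globs: ["src/server/**/*.ts"]
alwaysApply: false
---

- Never expose the service-role key in client-side code; all client access goes through RLS.
- Create migrations with the Supabase CLI; never edit an already-applied migration file.
- No destructive SQL (DROP, TRUNCATE) without an explicit human confirmation step.
```

Setting `alwaysApply: true` makes a rule a global invariant; with `globs`, it auto-loads only when matching files enter the context.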

Conceptually, this is the "constitution layer": what must always hold, regardless of the local task. Instead of hoping the model "remembers" the conventions, you formalize them once as a shared constraint surface.

Layer 2: Native Repo MCP

The second layer is a Native Repo MCP server that exposes live project structure as tools, resources, and prompts (so you don’t write the same thing over and over).

Of course, you can personalize this based on your repo/project. For our experiment, the MCP modules included:

  • A Database module: schemas, relations, constraints, and functions from Supabase / Postgres.
  • A Components module: a searchable catalog of UI components, props, and usage patterns.
  • A Migrations & Scripts module: migration history, code-quality scripts, and operational scripts.
  • A Prompts module: reusable templates for common tasks (schema changes, code-quality passes, architectural reviews).

Instead of front-loading giant blobs of text into every prompt, the IDE + MCP gives the agent an API over the codebase and its metadata. The model intelligently requests what it needs via tools and resources.
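As a toy illustration (this is not the real MCP SDK; the tool names and return shapes are invented), the core idea is an explicit registry of narrow, queryable capabilities instead of a context dump:

```typescript
// Toy sketch of the repo-MCP idea: the agent first lists capabilities,
// then issues narrow, targeted calls -- never a full-repo dump.
type Tool = { name: string; run: (args: Record<string, string>) => string };

const tools: Tool[] = [
  {
    name: "db.describeTable",
    // A real server would query Postgres information_schema via Supabase.
    run: ({ table }) => JSON.stringify({ table, columns: ["id", "email"] }),
  },
  {
    name: "components.search",
    run: ({ query }) => JSON.stringify({ matches: [`<Button /> (matched: ${query})`] }),
  },
];

function listTools(): string[] {
  return tools.map((t) => t.name);
}

function callTool(name: string, args: Record<string, string>): string {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.run(args);
}

console.log(listTools());
console.log(callTool("db.describeTable", { table: "users" }));
```

A production server would expose this same shape through the MCP SDK's tools, resources, and prompts primitives.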

This addresses several failure modes identified in long-context research:

  • It controls signal-to-noise by narrowing every retrieval to a targeted query instead of a full-repo dump.
  • It mitigates “lost in the middle” and context rot by fetching relevant fragments on demand rather than packing everything into a single monolithic window.
  • It reduces representation drift by anchoring the model on canonical schemas and rules whenever it acts.

Overall, it gives the IDE more space for autonomy within the sessions and repo boundaries.

Activation via Specification Templates

A static/dynamic context substrate is only useful if you activate it consistently. The final part of Area 3 is a set of specification templates within the repo MCP that:

  • pull from the rulebook and MCP,
  • encode business and technical intent,
  • and provide the shared basis for planning, implementation, and validation.

MCP’s three primitives (tools, resources, prompts) are used to standardize this activation: the agent can list capabilities, fetch structured metadata, and then apply prompt templates.
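A minimal sketch of such a template in code (the field names and commands are hypothetical): the template pulls structured inputs, from the rulebook and MCP metadata, into one artifact the agent plans against.

```typescript
// Hypothetical spec-template builder: business intent plus MCP-sourced
// metadata rendered into one machine-readable planning artifact.
interface SpecInputs {
  feature: string; // business intent
  tables: string[]; // from the MCP database module
  validation: string[]; // commands the agent must run before "done"
}

function buildSpec(inputs: SpecInputs): string {
  return [
    `## Feature: ${inputs.feature}`,
    `### Data models (from repo MCP)`,
    ...inputs.tables.map((t) => `- ${t}`),
    `### Validation loop`,
    ...inputs.validation.map((c) => `- run: ${c}`),
  ].join("\n");
}

const spec = buildSpec({
  feature: "Password reset",
  tables: ["users", "password_reset_tokens"],
  validation: ["pnpm lint", "pnpm test auth"],
});
console.log(spec);
```

The point is that the validation loop is part of the spec itself, not an afterthought, so "done" is machine-checkable.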

(Note that frameworks like spec-kit (with its constitution.md and /speckit.specify flow) and Archon (as a vector-backed context hub) offer alternatives to this static/dynamic pattern.)


Area 4: Agentic Orchestration on a Shared Context Substrate

The fourth step moves from a single AI assistant to orchestrated agents and resources, reusing the context architecture built in Area 3 and the practices from Areas 1–2. The agents use the static–dynamic context as shared infrastructure at the system level.

Hierarchical, Role-Based Agents with Validation Gates

Agent orchestration follows role specialization and hierarchical delegation:

  • A Codebase Analyst agent for structural understanding.
  • A Research agent for external documentation and competitive intelligence.
  • A Validator / QA agent for tests, linting, and policy checks.
  • A Maintenance / Cleanup agent that runs after features land to remove dead code and reduce contextual noise.
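The delegation pattern can be sketched as a simple routing table (the agent behaviors here are stubbed one-liners; a real system wires each role to its own model, tools, and context slice):

```typescript
// Stubbed sketch of hierarchical, role-based delegation: a coordinator
// routes a task through specialist roles in a fixed pipeline.
type Role = "analyst" | "research" | "validator" | "cleanup";

const agents: Record<Role, (task: string) => string> = {
  analyst: (t) => `structural report: ${t}`,
  research: (t) => `external docs gathered: ${t}`,
  validator: (t) => `tests and lint passed: ${t}`,
  cleanup: (t) => `dead code removed after: ${t}`,
};

function delegate(task: string, pipeline: Role[]): string[] {
  return pipeline.map((role) => agents[role](task));
}

console.log(delegate("add OAuth login", ["analyst", "research", "validator", "cleanup"]));
```

What matters is the fixed pipeline shape: every task passes through validation and cleanup, so no agent can declare its own work done.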

This is paired with a strict two-phase flow:

Phase 1 – Agentic planning & specification. Planning agents, guided by AGENTS.md (a "README for AI agents"), produce a central, machine-readable spec. Similar to Product Requirement Prompts (PRPs), this artifact bundles product requirements with curated contextual intelligence. It encodes not just what to build, but how it will be validated. Cursor's Planning mode turned out to be a strong baseline assistant, and the following tools proved consistently useful alongside it for inspiration and execution:

  • github/spec-kit: A GitHub toolkit for spec-driven development
  • eyaltoledano/claude-task-master: Task orchestration system for Claude-based AI development
  • Wirasm/PRPs-agentic-eng: Framework using Product Requirement Prompts (PRPs)
  • coleam00/Archon: Centralized command-center framework for coordinating multi-agent systems, context, and knowledge management.

Phase 2 – Context-engineered implementation. Implementation agents operate under that spec, with the rulebook and MCP providing guardrails and live project knowledge. Execution is governed by a task manager that applies weighted autonomy: high-risk actions (e.g., migrations, auth changes) require human checkpoints; low-risk refactors can run with more freedom.

Crucially, validation is embedded:

  • Specifications include explicit validation loops: which linters, tests, and commands the agent must run before declaring a task complete.

  • Risk analysis upfront (drawing on tools like TaskMaster’s complexity assessment) informs task splitting and prioritization.

  • Consistency checks detect mismatches between specs, plans, and code before they harden into defects.
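A minimal version of such a validation loop (the command list is illustrative): a task may only be declared complete when every spec-mandated command exits zero.

```typescript
import { spawnSync } from "node:child_process";

// Hedged sketch: run each validation command from the spec and gate
// task completion on all of them exiting successfully.
function validationGate(commands: string[]): boolean {
  return commands.every((cmd) => {
    const [bin, ...args] = cmd.split(" ");
    const result = spawnSync(bin, args, { encoding: "utf8" });
    return result.status === 0; // any non-zero exit fails the gate
  });
}

// With a real spec this list would be ["pnpm lint", "pnpm test", ...].
console.log(validationGate(["node --version"]));
```

Because the commands come from the spec, the gate and the requirements cannot drift apart.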


That division of labor is not a universal constant, but it seems to us a concrete pattern: agents absorb routine implementation (coding); senior software engineers concentrate on design and risk.



Where This Approach Fits (and Where It Doesn't)

This is a single experiment with limitations:

  • It is a single SaaS project in TypeScript, on an AI-friendly stack, with a two-person team.
  • It does not validate generalization to embedded systems, data engineering stacks, heavily distributed microservices, or legacy technology bases.

The broader best practices (Area 2) and the following design principles, however, are portable:

Separate stable constraints from the dynamic state. Encode invariants in a rulebook; expose current reality via a programmatic context layer.

Expose context via queries, not monolithic dumps. Give agents MCP-style APIs into schemas, components, and scripts rather than feeding them full repositories.

Standardize context at the system level. Treat static + dynamic context as shared infrastructure for all agents, not as a per-prompt concern.

Front-load context engineering where the payoff amortizes. For multi-sprint or multi-month initiatives, the fixed cost of building rulebooks and MCP servers pays back over thousands of interactions. For a weekend project, it likely does not.


Main takeaway

In this experiment, the main constraint on AI-assisted development was not model capability but how context was structured and exposed. The biggest gains came from stacking four choices: an AI-friendly, low-friction stack; prompts treated as operational artifacts; a two-layer context substrate (rules + repo MCP); and agent orchestration with explicit validation and governance.

This is not a universal recipe. But the pattern that emerged is this: if you treat context as a first-class architectural concern rather than a side effect of a single chat session, agents can credibly handle most routine implementation while humans focus on system design and risk.

Disclosure: this space moves fast enough that some specifics may age poorly, but the underlying principles should remain valid.

\n \n

\

Market Opportunity
null Logo
null Price(null)
--
----
USD
null (null) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

Former BlackRock Executive Joseph Chalom: How will Ethereum reshape the global financial system?

Former BlackRock Executive Joseph Chalom: How will Ethereum reshape the global financial system?

Ex-BlackRock Exec: Why Ethereum Will Reshape Global Finance | Joseph Chalom Guest: Joseph Chalom, Co-CEO of SharpLink and former BlackRock executive Moderator: Chris Perkins, CEO of CoinFund Podcast Date: September 10 Compiled and edited by LenaXin Editor's Summary This article is compiled from the Wealthion podcast, where we invite SharpLink co-founder and former BlackRock executive Joseph Chalom and CoinFund President Chris Perkins to discuss how the tokenization of real-world assets, rigorous risk management, and large-scale intergenerational wealth transfer can put trillions of dollars on the Ethereum track. Why Ethereum could become one of the most strategic assets of the next decade? Why DATs offer a smarter, higher-yielding, and more transparent way to invest in Ethereum ChainCatcher did the collating and compilation. Summary of highlights My focus has always been on building a bridge between traditional finance and digital assets, and upholding my principles while raising industry standards. Holding ETH indirectly through holding public shares listed on Nasdaq has its unique advantages. It is necessary to avoid raising funds when there is actual dilution of shareholder equity. You should wait until the multiple recovers before raising funds, purchasing ETH and staking. The biggest risk today is no longer regulation, but how we behave and the kinds of risks we are willing to take in pursuit of returns. A small, focused team can achieve significant results by doing just a few key things. If you can earn ETH through business operations, it will form a powerful growth flywheel. I hope that in a year and a half, we can establish one or two companies that support the closed loop of transactions in the Ethereum ecosystem and generate revenue denominated in ETH, thus forming a virtuous circle. 
The current global financial system is highly fragmented: assets such as stocks and bonds are limited to trading in specific locations, lack interoperability, and each transaction usually requires transfer through fiat currency. (I) From BlackRock to Blockchain: Joseph’s Financial Journey Chris Perkins: Could you tell us about your background? Joseph Chalom: I've only been CEO of SharpLink for five weeks, but my story goes far beyond that. Before coming here, I spent a full twenty years at BlackRock. For the first decade or so, I was deeply involved in the expansion of BlackRock's Aladdin fintech platform. This experience taught me how to drive business growth and identify pain points within the business ecosystem. My last five years at BlackRock have been particularly memorable: I led a vibrant and elite team to explore the new field of digital assets. I was born into an immigrant family and grew up in Washington, D.C. I came to New York 31 years ago, and the energy of this city still drives me forward. Chris Perkins: You surprised everyone by coming back after retirement. Joseph Chalom: I didn't jump directly from BlackRock to Sharplink. I officially retired with a generous compensation package. I was planning to relax and unwind, but then I got a surprise call. My life seems to have always intersected with Joe Rubin's. We talk about mission legacy, and it sounds cliché, but who isn’t striving to leave a mark? My focus has always been on building a bridge between traditional finance and digital assets, upholding my principles while raising industry standards. When I learned that a digital asset vault project needed a leader, I was initially cautious. But the expertise of ConsenSys, Joe’s board involvement, and the project’s potential to help Sharplink stand out ultimately convinced me, and so my short retirement came to an end. Ideally, everyone would have had a few months to reflect on the situation. 
However, the market was undergoing a critical turning point at the time. It wasn't a battle between Bitcoin and Ethereum, but rather Ethereum was entering its own era and should not be assigned the same risk attributes as Bitcoin. Frankly, I oppose irrational market bias. All assets have value in a portfolio. My decision to re-enter the market stems from my unwavering belief in Ethereum's long-term opportunities. 2. Why Ethereum is a core bet Chris Perkins: Can you talk about how you understand DATS and the promise of Ethereum? Joseph Chalom: If we believe that the financial services industry is going to go through a structural reshaping that will last for a decade or even decades, and you are not looking for short-term trading or speculation but long-term investment opportunities, then the key question is where can you have the greatest impact? There are many ways to hold ETH. Many choose to hold it in spot form, or store it in a self-custodial wallet or custodian institution. Some institutions also prefer ETF products. Of course, each method has certain limitations and risks . Indirectly holding ETH through holding public shares listed on Nasdaq has its unique advantages. Furthermore, by wrapping your equity in a publicly traded company, you not only capture the growth of ETH itself—its price has risen significantly over the past few months—but also earn staking returns. Holding shares in publicly traded companies often carries the potential for multiple increases in value. If you believe in the company's growth potential, this approach can yield significantly higher returns over the long term than simply holding ETH. Therefore, the logical order is very clear. First, you must be convinced that Ethereum contains long-term opportunities; secondly, you can choose what tools to use to hold it. (3) Promoting the growth of net assets per share: What is the driving force of the model? 
Chris Perkins: In driving MNAV growth, how do you balance financial operations, timely share issuance to increase earnings per share, with truly improving fundamentals and potential returns? Joseph Chalom: I think there are two complementary elements. The first is how to raise funds in a value-added manner . Most fund management companies currently raise funds mainly through issuing stocks. Issuing equity when the share price is higher than the underlying asset's net asset value (NAV) is a method of raising capital using a NAV multiple. At this point, the enterprise's value exceeds the actual value of the ETH held. Financing methods include a market offering, a registered direct offering, or starting with a pipeline. The key is that the financing must achieve value-added , otherwise early investors and shareholders will think that you are diluting their interests simply by increasing your holdings of ETH. If financing is efficient, the cost of acquiring ETH is reasonable, and staking yields returns, the value of each ETH share will increase over time. As long as financing can increase the value of each ETH share, it is an added value for shareholders. Of course, the net asset value (NAV) or main net asset value (MNAV) multiple can be high or fall below 1, which is largely affected by market sentiment and will eventually revert to the mean in the long run. Therefore, it is necessary to avoid raising funds when there is actual dilution of shareholder equity. One should wait until the multiple recovers before conducting financing, purchasing ETH, and staking operations. Chris Perkins: So essentially you're monitoring the average net asset value (MNAV). If the MNAV is less than 1, in many cases, that's a buying opportunity. Joseph Chalom: ETH attracts the following types of investors: 1. Retail investors and long-term holders who believe in the long-term capital appreciation potential of Ethereum. 
Even without considering staking returns, they actively hold Ethereum through public financial companies like us to seek asset appreciation and passive income. 2. Some investors prefer Ethereum's current high volatility, especially given the increasing institutionalization of Bitcoin and the relatively increased volatility of Ethereum. 3. Investors who are willing to participate in Gamma trading through an equity-linked structure to earn returns on their lending capital. A key reason I joined Sharplink was not only to establish a shared understanding as a strategic partner, but also to attract top institutional talent and conduct business in a risk-adjusted manner. The biggest risk today is no longer regulation, but how we behave and the types of risks we are willing to take in pursuit of returns. (IV) Talent and Risk: The Core Secret to Building an Excellent Team Chris Perkins: How do you find and attract multi-talented individuals who are proficient in both DeFi and traditional finance (e.g., Wall Street)? How do you address security risks like hacker attacks and smart contract vulnerabilities? Joseph Chalom: Talent is actually relatively easy to find. I previously led the digital assets team at BlackRock. We started with a single core member and gradually built a lean team of five strategists and seven engineers. Leveraging BlackRock's brand and reputation, we raised over $100 billion in a year and a half. This demonstrates that a small, focused team, focused on a few key areas, can achieve significant results. We recruit only the brightest and most mission-driven individuals, adhering to a single principle: we reject arrogance and negativity. We seek individuals who truly share our vision for long-term change. These individuals aren't simply optimistic about ETH price increases or pursuing short-term capital management, but rather believe in the profound and lasting structural transformation of the industry and are committed to participating in it. 
Excellent talents often come from recommendations from trusted people, not headhunters. The risks are more complex. Excessive pursuit of extremely high returns, anxious pursuit of every possible basis point of gain, or measuring progress over an overly short timeframe can easily lead to mistakes. We view ourselves as a long-term opportunity, and therefore should accumulate assets steadily. Risk primarily stems from our operational approach : for every $1 raised, we purchase $1 worth of ETH, ultimately building a portfolio of billions of ETH. This portfolio requires systematic management, encompassing a variety of methods, from the most basic and secure custodial staking to liquidity staking, re-staking, revolving strategies, and even over-the-counter lending. Each approach introduces potential risk and leverage. Risk itself can bring rewards. However, if you don't understand the risks you are taking, you shouldn't enter this field. You must clearly identify smart contract risk, protocol risk, counterparty risk, term risk, and even the convexity characteristics of the transaction, and use this to establish an effective risk-reward boundary . Our goal is to build an ideal investment portfolio, not to pursue high daily returns , but to consistently win the game. This means creating genuine value for investors. Those who blindly pursue returns or lack a clear understanding of their own operations may actually create resistance for the entire industry. Chris Perkins: Is risk management key to long-term success? Do you plan to drive business success through a lean team and low operating cost model? Joseph Chalom: Looking back on my time at BlackRock, one thing stands out: the more successful a product is, the more humble it requires . Success is never the product of a few individuals. Our team is merely the tip of the spear in the overall system, backed by a strong brand reputation, distribution channels, and a large, trusted trustee. 
One of the great appeals of the digital asset business is its high scalability. While you'll need specialized teams like compliance and accounting to meet the requirements of a public company, the team actually responsible for fundraising can be very lean. Whether you're managing $3.5 billion or $35 billion in ETH, scale itself isn't crucial. If you build an efficient portfolio that can handle $1 billion in assets, it should be able to scale even further. The core issue is that when the scale becomes extremely large, on the one hand, caution must be exercised to avoid interfering with or questioning the security and stability of the protocol; on the other hand, it must be ensured that the pledged assets can still maintain sufficient liquidity under adverse circumstances. Chris Perkins: In asset management, how do you understand and implement the first principle that "treasures don't exist to lose money"? Joseph Chalom: At BlackRock, they used to say that if 65% to 70% of the assets you manage are pensions and retirement funds, you can't afford to lose anything. Because if we make a mistake, many people will not be able to retire with dignity. This is not only a responsibility, but also a heavy mission. (V) How SharpLink Gains an Advantage in Competition Chris Perkins: In the long term, how do you plan to position yourself to deal with competition from multiple fronts, including ETH and other tokens? Joseph Chalom: We can learn from Michael Saylor's strategy, but the fund management approach for ETH is completely different because it has higher yield potential . I view competitors as worthy of support. We have great respect for teams like BM&R. Many participants from traditional institutions recognize this as a long-term opportunity. There are two main ways to participate: directly holding ETH or generating income through ecosystem applications. We welcome this competition; the more participants, the more prosperous the industry. 
Ultimately, this space may be dominated by a small number of institutions actively accumulating ETH. We differentiate ourselves primarily through three key areas: First, we are the most trusted team among institutions . Despite our small size, we bring together top experts to manage assets with professionalism and rigor. Second, our partnership with ConsenSys . Their expertise provides us with a unique strategic advantage. Third, operating the business . In addition to accumulating and increasing the value of assets, we also operate a company focused on affiliate marketing in the gaming industry to ensure compliance with SEC and Nasdaq regulatory requirements. In the future, earning ETH through operational operations will create a powerful growth flywheel . Staking income, compounding debt interest, and ETH-denominated income will collectively accelerate the expansion of fund reserves. This approach may not be suitable for all ETH fund managers. (VI) Strategic Layout: Mergers and Acquisitions and Global Expansion Plans Chris Perkins: What is your overall view and direction on future M&A strategy? Joseph Chalom: If the amount of ETH debt grows significantly and some of this debt is illiquid, this could present opportunities. Currently, listed companies in this sector primarily raise capital through daily market programs. If the stock is liquid, this channel can be effectively utilized. However, some companies struggling to raise capital may trade at a discount to net assets or seek mergers, which could be an innovative way to acquire more ETH. As the industry matures, yields could gradually increase from 0.5%-1% of ETH supply to 1.5%-2.5%. It might be wise to issue sister bonds with similar structures in different regions, such as Asia or Europe, with identical issuance conditions and shared core operating costs and infrastructure, thereby reaching a wider range of investors. 
We expect to engage in such creative M&A in the future, but the timing is still uncertain. I believe the industry will first go through a phase of differentiation before entering a period of consolidation; technological development and business evolution often follow this pattern. Similar consolidation and M&A trends are likely in the stablecoin sector, which will be worth watching.

Chris Perkins: Why is transparency so important? What is the main motivation for disclosing operational details on a daily basis?

Joseph Chalom: Most companies don't issue shares frequently, typically only once every few years, and SEC rules require them to disclose shares outstanding only in quarterly reports. In our industry, fundraising may occur daily, weekly, or at other frequencies. Therefore, to fully reflect operating status, a series of key metrics must be disclosed publicly: the amount of ETH held, total funds raised, the weekly increase in ETH, whether ETH is held outright or only through derivatives, the staking ratio, and returns. We publish press releases and 8-K filings every Tuesday morning to update investors on this data. Although some indicators may look unfavorable in the short term, transparent operations build investor trust and retention over the long term. Investors have the right to understand clearly what they are buying, and concealing information makes it difficult to gain a foothold.

(VII) SharpLink's Growth Plan for the Next 12 to 18 Months

Chris Perkins: What are your plans or vision for the company's development over the next twelve to eighteen months?

Joseph Chalom: Our first priority is to build a world-class team, though that won't happen overnight. We have continued to recruit key talent and have assembled a lean team of fewer than 20 people, each of whom excels in their field and works collaboratively to drive growth.
Second, continue to raise funds in ways that do not dilute shareholder equity, flexibly adjusting fundraising to market rhythms. The long-term goal is to continuously increase the concentration of ETH per share. Third, actively accumulate ETH: if you firmly believe in Ethereum's potential, you should seize opportunities to increase holdings efficiently at the lowest cost, even for funds that allocate only 5% to ETH. Fourth, integrate deeply into the ecosystem. As an Ethereum company and treasury, we would be remiss not to use our ETH holdings to create value for the ecosystem; we can deploy billions of dollars in ETH to support protocol development through lending, providing liquidity, and other means, advancing the protocol in a way that benefits the ecosystem. Finally, I hope that within eighteen months we can establish one or two businesses that close the transaction loop in the Ethereum ecosystem and generate ETH-denominated revenue, forming a virtuous circle.

(VIII) Core Investment Insights: Key Areas for Future Attention

Chris Perkins: What additional advice or information would you offer to potential investors considering including SBET in their investment plans?

Joseph Chalom: The current traditional financial system suffers from significant friction: capital flows are inefficient and settlement is delayed, sometimes T+1 at the fastest. This creates substantial settlement, counterparty, and collateral-management risks. The transformation will begin with stablecoins. The stablecoin market has reached $275 billion, running primarily on Ethereum. The real potential, however, lies in tokenized assets. As Treasury Secretary Bessent has stated, stablecoins are expected to grow from current levels to $2-3 trillion over the next few years.
Tokenized assets such as funds, stocks, bonds, real estate, and private equity could reach trillions of dollars and run on decentralized platforms like Ethereum. Some are drawn to ETH's yield potential, while many more are optimistic about its future; ETH is not just a commodity, it can generate yield. With trillions of dollars in stablecoins flowing into the Ethereum ecosystem, ETH has undoubtedly become a strategic asset. Building a strategic reserve of ETH is essential, because a certain supply is needed to ensure the flow of dollars and assets through the system. I can't think of an asset with more strategic significance. More importantly, on-chain securities issuance by firms like Superstate and Galaxy marks one of blockchain's biggest unlocks: real-world assets are no longer locked in custodial silos but are integrated directly into the ecosystem through tokenization. This is a turning point that has yet to be widely recognized, but it will profoundly change the financial landscape.

Chris Perkins: The pace of development is far exceeding expectations. Regulated assets are only just beginning to come on-chain; as more of them emerge, a whole new ecosystem is forming that will greatly accelerate the development and integration of assets on Ethereum and other blockchains.

Joseph Chalom: When discussing the case for tokenization, people often cite programmability, borderlessness, instant or atomic settlement, neutrality, and trustworthiness. A deeper reason lies in today's highly fragmented global financial system: assets like stocks and bonds are restricted to trading in specific venues, lack interoperability, and each transaction typically requires fiat currency. In the future, with instant settlement and composability, smart contracts will support automated trading and asset rebalancing, almost returning to the flexible exchange of barter.
For example, why can't the S&P 500 be traded against a Magnificent 7 basket? Whether through swaps, lending, or other instruments, financial products will become highly composable, breaking the traditional notion of trading in a specific venue. This will not only unleash enormous economic potential but also reshape the entire financial ecosystem by rebuilding the underlying logic of value exchange. As for SBET, we plan to launch a compliant tokenized version in the near future, prioritizing Ethereum over Solana as the underlying infrastructure.
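A note on the accretion mechanic Chalom invokes ("raise funds in a manner that does not dilute shareholder equity" while increasing ETH per share): issuing new shares above net asset value and converting the proceeds into ETH raises ETH per share, while issuing below NAV dilutes it. The sketch below illustrates the arithmetic with entirely hypothetical figures; the function names and numbers are illustrative, not SharpLink data.

```python
# Hypothetical sketch of ETH-per-share accretion when issuing equity
# at a premium to net asset value (NAV). All figures are invented.

def eth_per_share(eth_held: float, shares: float) -> float:
    return eth_held / shares

def raise_at_multiple(eth_held: float, shares: float,
                      usd_raised: float, eth_price: float,
                      nav_multiple: float) -> tuple[float, float]:
    """Issue new shares at nav_multiple x NAV per share and
    convert the USD proceeds into ETH at eth_price."""
    nav_per_share = eth_held * eth_price / shares
    issue_price = nav_per_share * nav_multiple
    new_shares = usd_raised / issue_price
    new_eth = usd_raised / eth_price
    return eth_held + new_eth, shares + new_shares

# Hypothetical treasury: 1M ETH, 100M shares outstanding.
eth, shares = 1_000_000.0, 100_000_000.0
before = eth_per_share(eth, shares)

# Raising $100M at 1.5x NAV buys more ETH per new share issued,
# so ETH per share rises; at 0.8x NAV it would fall (dilution).
eth2, shares2 = raise_at_multiple(eth, shares, 100e6, 4_000.0, 1.5)
assert eth_per_share(eth2, shares2) > before

eth3, shares3 = raise_at_multiple(eth, shares, 100e6, 4_000.0, 0.8)
assert eth_per_share(eth3, shares3) < before
```

The same arithmetic explains why treasury companies trading at a discount to NAV stop issuing and become acquisition targets, as discussed in the M&A section above.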
PANews · 2025/09/18 16:00