Finance requires architecture specifically designed for parallel processing, composable primitives, and institutional compliance.

Financial infrastructure requires rethinking blockchain architecture | Opinion

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

The crypto industry has an infrastructure problem that’s rarely discussed directly: we’ve been building financial systems on blockchains that weren’t designed for finance. Fixing that means rethinking blockchain architecture itself.

Summary
  • General-purpose blockchains struggle with finance. Sequential execution creates bottlenecks; financial transactions need parallel processing to scale efficiently.
  • Composability drives ecosystem value. Shared infrastructure primitives allow protocols to build on each other, reducing fragmentation and enabling capital-efficient, yield-bearing products.
  • Institutional adoption requires infrastructure, not just features. Permissioned compliance, KYC, and auditing modules on decentralized systems are prerequisites for serious institutional participation.

I noticed this the moment we started building Momentum. Most protocols launch as isolated products (a DEX, a lending market, a staking solution), treating each as a separate tool rather than part of an interconnected system. But this fragmentation reveals a deeper architectural mismatch. The blockchain layer underneath simply wasn’t built to handle what finance demands: parallel processing at scale, composable primitives, and infrastructure that other protocols can reliably build upon.

This isn’t theoretical. It manifests in transaction failures during peak demand, capital inefficiency in liquidity markets, and an ecosystem where each protocol operates in isolation rather than synergistically.

The real constraint: Blockchains weren’t designed for finance

When we were deciding where to build our DEX, the choice was obvious to me but seemed counterintuitive to many. Everyone asked: Why not Ethereum (ETH)? The answer reveals everything about how I think about infrastructure.

Consider the fundamental difference between how Ethereum and Sui (SUI) process transactions. Ethereum’s sequential execution model means every transaction must be processed in order, creating bottlenecks under load. This wasn’t a bug in Ethereum’s design; it was never the intended use case. Ethereum was built to be a general-purpose compute platform.

Finance demands something different. Most financial operations are independent. When Alice swaps tokens and Bob stakes assets, these transactions don’t depend on each other. Sequential processing creates artificial congestion. Parallel processing is not just an optimization; it’s structurally necessary.
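The point about independent transactions can be made concrete with a toy sketch (all names and balances hypothetical, and this is a simulation of the idea, not how any real chain executes). Because Alice’s swap and Bob’s stake touch disjoint accounts, a runtime can schedule them simultaneously with no coordination:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical toy ledger. Each transaction below touches a disjoint
# set of accounts, so neither needs to wait for the other.
ledger = {"alice": 100, "bob": 100, "dex": 0, "staking": 0}

def swap(user, amount):
    # Alice's swap only touches her balance and the DEX pool.
    ledger[user] -= amount
    ledger["dex"] += amount

def stake(user, amount):
    # Bob's stake only touches his balance and the staking pool.
    ledger[user] -= amount
    ledger["staking"] += amount

# The two transactions share no accounts, so their relative order is
# irrelevant; a parallel executor can run them at the same time.
with ThreadPoolExecutor() as pool:
    pool.submit(swap, "alice", 30)
    pool.submit(stake, "bob", 50)

print(ledger)
```

A sequential chain would force an ordering on these two operations anyway; a parallel, object-centric model only serializes transactions that actually contend for the same state.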

Sui was built from the ground up with parallel execution and object-centric design using the Move programming language. This architectural choice isn’t just faster — it enables an entirely different category of financial products to exist at scale.

The proof came faster than we expected. In six months, our DEX scaled from zero to $500M in liquidity and $1.1B in daily trading volume, accumulating $22B in cumulative trading volume while onboarding 2.1 million users without meaningful congestion. Processing that kind of volume without transaction failures isn’t a marketing achievement; it’s evidence of fundamental architectural soundness. Try achieving those metrics on a sequentially executing blockchain and you’d see exactly why the architecture matters.

Why infrastructure composability matters more than individual products

There’s a second, more subtle problem I’ve learned to recognize: financial products should be composable building blocks, not isolated silos.

A properly designed financial infrastructure layer should allow other protocols to build on shared primitives. If every protocol has to build its own treasury management, its own staking solution, its own liquidity infrastructure, the ecosystem fragments. Developers spend time solving identical problems rather than innovating on new ones. I’ve watched this happen repeatedly across chains.

This is where most protocols fail. They build one product well, then the ecosystem around them calcifies. Each new protocol essentially starts from scratch.

When we built our protocol, we deliberately chose not to just create a DEX. We built infrastructure primitives that other protocols would rationally choose to use rather than rebuild. MSafe, our treasury management solution, now secures hundreds of millions across the Move ecosystem. Not because we forced adoption, but because it solved a real problem better than the alternatives.

More protocols building on shared infrastructure means more integration points, more composability, and higher system value for everyone. This only works if the primitives are actually good. Concentrated liquidity market-making technology with aligned incentives creates capital efficiency that traditional AMMs can’t match. Liquid staking that produces a yield-bearing receipt token creates collateral that keeps earning yield even while deployed elsewhere. Multi-signature treasury management that works reliably reduces friction for protocol governance.
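The yield-bearing receipt token mechanic can be sketched with simple share accounting (a minimal model in the spirit of tokenized-vault designs; class and method names are hypothetical, not any specific protocol’s implementation). Stakers mint shares against deposits, rewards accrue to the pool without minting new shares, so each share redeems for progressively more of the base asset:

```python
# Toy model of a yield-bearing receipt token (names hypothetical).
class LiquidStakingPool:
    def __init__(self):
        self.total_assets = 0.0   # base asset held by the pool
        self.total_shares = 0.0   # receipt tokens outstanding

    def exchange_rate(self):
        # assets redeemable per share; 1.0 before any rewards accrue
        return self.total_assets / self.total_shares if self.total_shares else 1.0

    def deposit(self, amount):
        # mint shares at the current rate
        shares = amount / self.exchange_rate()
        self.total_assets += amount
        self.total_shares += shares
        return shares

    def accrue_rewards(self, amount):
        # rewards increase assets without minting shares;
        # this is exactly what makes the receipt token yield-bearing
        self.total_assets += amount

pool = LiquidStakingPool()
shares = pool.deposit(100.0)   # 100 shares minted at rate 1.0
pool.accrue_rewards(10.0)      # rewards push the rate to 1.1
print(shares * pool.exchange_rate())  # ~110 of the base asset redeemable
```

Because the receipt token itself is transferable, that appreciating claim can simultaneously serve as collateral in a lending market, which is the capital-efficiency point above.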

These aren’t nice-to-have conveniences. They’re the difference between an ecosystem that compounds value and one that fragments. This is precisely what allows Momentum to provide infrastructure that other protocols rationally choose to build on rather than rebuild themselves.

The institutional capital problem is infrastructure, not features

Crypto has always struggled with institutional adoption. The standard explanation focuses on regulatory uncertainty or UX limitations. The real bottleneck is often simpler: institutions can’t use decentralized infrastructure that lacks compliance capabilities.

This isn’t a reason to centralize. It’s a reason to build the right layer on top of decentralized infrastructure. If you can offer permissioned compliance as an optional module, let institutional users verify their identity and trade with full regulatory clarity, while keeping the base infrastructure permissionless, you solve the problem without compromise.
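One way to picture this layering (a hedged sketch with hypothetical names, not a description of any deployed system): the base venue stays permissionless, while an optional gateway only forwards orders from addresses that have passed verification. Institutions route through the gateway; everyone else interacts with the base layer directly:

```python
# Sketch of an optional compliance layer over a permissionless base
# (all class and method names are hypothetical).
class BasePool:
    """Permissionless venue: accepts any address."""
    def trade(self, address, amount):
        return f"filled {amount} for {address}"

class CompliantGateway:
    """Opt-in wrapper: only forwards orders from verified addresses."""
    def __init__(self, pool):
        self.pool = pool
        self.verified = set()   # addresses that passed KYC

    def verify(self, address):
        # in practice this would check an attestation from a KYC provider
        self.verified.add(address)

    def trade(self, address, amount):
        if address not in self.verified:
            raise PermissionError(f"{address} is not KYC-verified")
        return self.pool.trade(address, amount)

pool = BasePool()
gateway = CompliantGateway(pool)
gateway.verify("0xabc")
print(gateway.trade("0xabc", 10))  # institutional path, with KYC guarantees
print(pool.trade("0xdef", 5))      # base layer remains open to anyone
```

The design choice is that compliance is additive: removing the gateway leaves the base layer untouched, so permissionless users lose nothing.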

Institutions won’t deploy serious capital into systems that can’t provide regulatory auditing, KYC verification, or compliance documentation. These aren’t features, they’re structural prerequisites for institutional participation. That’s not gatekeeping. It’s acknowledging reality.

The actual argument

Here’s the claim I’m making, separate from any particular protocol: Blockchains built for general computation cannot efficiently serve as financial infrastructure. Finance requires architecture specifically designed for parallel processing, composable primitives, and institutional compliance. Protocols will migrate toward blockchains with these properties—not because they’re trendy, but because the economics of operating on better infrastructure are simply superior.

This isn’t an argument that “Sui is better than Ethereum.” Ethereum can and should continue evolving. Layer-2 solutions are legitimate approaches. This is an argument that financial systems need to be built on different architectural foundations than general-purpose compute platforms.

The corollary is less obvious: if a blockchain is purpose-built for finance and achieves meaningful adoption, it becomes the natural foundation for financial innovation. Not because of marketing, but because other protocols rationally choose to build there.

The question for the industry isn’t which chain “wins.” It’s whether we’re willing to acknowledge that one-size-fits-all blockchain architecture was never the right approach, and that specialized infrastructure produces better financial outcomes.

That realization changes everything about how protocols should be built and where they should be deployed. It’s changing how I think about Momentum, and it should change how you think about where to build next.

ChefWen

ChefWen is the founder of Momentum, the Move Central Liquidity Engine. With a strong engineering background, including senior software engineering roles at Facebook’s Libra and at Amazon, Wendy combines deep technical expertise with visionary leadership to build scalable, industry-shaping solutions. Wendy holds Master’s degrees in Computer Engineering and in Operations Research in Industrial & Systems Engineering from the Georgia Institute of Technology. At Momentum, Wendy is spearheading efforts to become the central liquidity engine for the Move ecosystem with the launch of the first multi-chain ve(3,3) DEX, currently the #1 DEX on Sui. Her blend of technical acumen, entrepreneurial drive, and cross-cultural perspective makes her a compelling speaker on the future of Web3, innovation, and software engineering.
