
How Batched Threshold Encryption could end extractive MEV and make DeFi fair again

Batched Threshold Encryption (BTE) builds on foundational concepts such as threshold cryptography, which enables secure collaboration among multiple parties without exposing sensitive data to any single participant. BTE is an evolution of the earliest TE-encrypted mempool schemes, such as Shutter, which we have covered previously. For now, all existing work on BTE remains at the prototype or research stage, but it could shape the future of decentralized ledgers if successful. This creates a clear opportunity for more research and potential adoption, which we will explore in this article.

On most modern blockchains, transaction data is publicly viewable in the mempool before it is sequenced, executed and confirmed in a block. This transparency creates avenues for sophisticated parties to engage in extractive practices known as Maximal Extractable Value (MEV). MEV arises from the block proposer’s ability to reorder, include or omit transactions for financial gain.

Typical forms of MEV exploitation, such as frontrunning and sandwich attacks, remain pervasive, particularly on Ethereum, where an estimated $2.9 million was extracted during the Oct. 10 flash crash. Accurately measuring total extractive MEV remains difficult because roughly 32% of these attacks were privately relayed to miners, with some involving over 200 chained subtransactions in a single exploit.

Some researchers have sought to prevent MEV with encrypted mempool designs, in which pending transactions stay encrypted until block finalization. This prevents other blockchain participants from seeing what trades or actions the transacting users are about to take. Many encrypted mempool proposals use some form of threshold encryption (TE) for this. TE splits the secret key that can decrypt the transaction data among several servers. Akin to a multisig, a minimum number of signers must work together to combine their key shares and unlock the data.
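To make the multisig analogy concrete, here is a minimal sketch of the threshold property TE schemes rely on, using Shamir secret sharing over a prime field. The field size, threshold and share count are illustrative choices, not parameters from any deployed mempool:

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def split_secret(secret, threshold, n_servers):
    """Shamir sharing: embed the secret as f(0) in a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_servers + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0: any `threshold` shares recombine to the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split_secret(secret=42, threshold=3, n_servers=5)
assert recover_secret(shares[:3]) == 42  # any 3 of 5 shares suffice
assert recover_secret(shares[2:]) == 42  # a different 3 work equally well
```

In a real TE mempool, the servers never reconstruct the key in one place; each produces a partial decryption that is combined in the same Lagrange fashion. The sketch only demonstrates the threshold property itself.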

Why BTE matters

Standard TE struggles to scale efficiently because every server must decrypt each transaction separately and broadcast a partial decryption share for it. These individual shares are recorded onchain for aggregation and verification. This creates a server communication load that slows the network and increases chain congestion. BTE solves this limitation by allowing each server to release a single constant-sized decryption share that unlocks an entire batch, regardless of size. 
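A back-of-the-envelope comparison shows why this matters. The committee and batch sizes below are assumptions picked for illustration, not figures from any of the papers:

```python
n_servers = 16    # decryption committee size (assumed)
batch_size = 512  # encrypted transactions per block (assumed)

# Standard TE: every server broadcasts one partial decryption per transaction.
standard_te_shares = n_servers * batch_size  # 8,192 shares recorded onchain

# BTE: every server broadcasts one constant-sized share per batch.
bte_shares = n_servers                       # 16 shares recorded onchain

print(standard_te_shares, bte_shares)  # 8192 16
```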

The first functional version of BTE, developed by Arka Rai Choudhuri, Sanjam Garg, Julien Piet and Guru-Vamsi Policharla (2024), used the KZG polynomial commitment scheme. It lets the committee of servers bind a polynomial to a public key while keeping that polynomial initially hidden from both users and committee members.
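For intuition, here is a toy polynomial commitment in the KZG style, using the py_ecc library’s bn128 curve. The “trusted setup” secret is hardcoded, which a real multiparty ceremony would never allow; this is an illustrative sketch, not the scheme’s actual construction:

```python
from py_ecc import bn128

# Toy "trusted setup": powers of a secret tau in G1. In production, tau is
# generated in a distributed ceremony and destroyed; hardcoding it is insecure.
TAU = 123456789
MAX_DEGREE = 3
SRS = [bn128.multiply(bn128.G1, pow(TAU, i, bn128.curve_order))
       for i in range(MAX_DEGREE + 1)]

def commit(coeffs):
    """KZG-style commitment: C = f(tau) * G1, computed from the SRS without knowing tau."""
    acc = None
    for c, srs_i in zip(coeffs, SRS):
        term = bn128.multiply(srs_i, c % bn128.curve_order)
        acc = term if acc is None else bn128.add(acc, term)
    return acc

# Commit to f(x) = 7 + 3x + 5x^2: the single group element C binds the committee
# to f while revealing nothing about its coefficients.
C = commit([7, 3, 5])
print(C)
```

A full KZG scheme also includes pairing-based opening proofs that a claimed point lies on the committed polynomial; only the commitment step is shown here.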

Decrypting transactions that are encrypted to the public key requires proving that they correspond to points on the polynomial. Because a polynomial of degree d is fully determined by d + 1 points, the servers only need to collectively exchange a small amount of data to provide this proof. Once the shared polynomial is established, they can send out a single compact piece of information derived from it to unlock all transactions in the batch at once.

Importantly, transactions that do not lie on the polynomial remain locked, so the committee can selectively reveal a subset of the encrypted transactions while keeping others hidden. This guarantees that encrypted transactions outside the batch selected for execution remain encrypted.

Current TE implementations, such as Ferveo and MEVade, could therefore integrate BTE to preserve privacy for transactions not included in a batch. BTE also fits naturally with layer-2 projects such as Metis, Espresso and Radius, which already pursue fairness and privacy through time-delay encryption or trusted sequencers. By using BTE, these systems could achieve a trustless ordering process that prevents anyone from exploiting transaction visibility for arbitrage or liquidation gains.

However, this first version of BTE had two major drawbacks. First, it required a full reinitialization of the system, including a new round of key generation and parameter setup, each time a new batch of transactions was encrypted. Second, decryption consumed significant memory and processing power as nodes worked to combine all the partial shares.

Both factors limited BTE’s practicality. For instance, the frequent distributed key generation (DKG) ceremonies required for committee refresh and block processing made the scheme effectively prohibitive for moderately sized permissioned committees, let alone any attempt to scale to a permissionless network.

To guard against selective decryption, where validators decrypt only the transactions profitable to them, BTE makes all decryption shares publicly verifiable. This allows anyone to detect dishonest behavior and penalize offenders via slashing, and it keeps the process reliable as long as a threshold of honest servers remains active.

Upgrades to BTE

Choudhuri, Garg, Policharla and Wang (2025) made the first upgrade to BTE, improving server communication through a scheme called one-time setup BTE. It required only a single initial DKG ceremony, run once across all decryption servers. However, a multiparty computation protocol was still required to set up the commitment for each batch.

The first truly epochless BTE scheme arrived in August 2025, when Bormet, Faust, Othman and Qu introduced BEAT-MEV, which needs only a single one-time initialization to support all future batches. It achieved this using two advanced tools, puncturable pseudorandom functions and threshold homomorphic encryption, allowing servers to reuse the same setup parameters indefinitely. Each server only needed to send a small piece of data when decrypting, keeping server communication costs low.
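Puncturable pseudorandom functions are the less familiar of the two tools, so a minimal sketch helps. A punctured key lets a holder keep evaluating the function everywhere except one input, which is the kind of mechanism that allows setup material to be reused across batches without leaking values tied to a specific one. The GGM-style tree construction below over SHA-512 is a standard textbook sketch, not BEAT-MEV’s actual instantiation:

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG: one 32-byte seed expands to two 32-byte child seeds."""
    out = hashlib.sha512(seed).digest()
    return out[:32], out[32:]

def ggm_eval(key: bytes, bits: str) -> bytes:
    """Evaluate the GGM PRF: walk the binary tree from the root key along `bits`."""
    node = key
    for b in bits:
        left, right = prg(node)
        node = left if b == "0" else right
    return node

def puncture(key: bytes, point: str) -> dict:
    """Punctured key: the sibling seed at every level of the path to `point`.
    These seeds cover the whole input space except `point` itself."""
    siblings = {}
    node = key
    for i, b in enumerate(point):
        left, right = prg(node)
        sibling_prefix = point[:i] + ("1" if b == "0" else "0")
        siblings[sibling_prefix] = right if b == "0" else left
        node = left if b == "0" else right
    return siblings

def eval_punctured(punctured_key: dict, bits: str) -> bytes:
    """Evaluate at any input except the punctured point."""
    for prefix, seed in punctured_key.items():
        if bits.startswith(prefix):
            return ggm_eval(seed, bits[len(prefix):])
    raise ValueError("cannot evaluate at the punctured point")

key = b"\x01" * 32
pk = puncture(key, "0110")
assert eval_punctured(pk, "0111") == ggm_eval(key, "0111")  # works off the point
# eval_punctured(pk, "0110") would raise: the punctured value stays hidden
```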

Overview of projected performance

Later, another paper, BEAST-MEV, introduced the concept of Silent Batched Threshold Encryption (SBTE), which removed the need for any interactive setup between servers. It replaced repeated coordination with a non-interactive, universal one-time setup that allows nodes to operate independently.

However, combining all the partial decryptions afterward still required heavy interactive computation. To fix this, BEAST-MEV borrowed BEAT-MEV’s sub-batching technique and used parallel processing to let the system decrypt large batches (up to 512 transactions) in under one second. The following table summarizes how each successive design improves on the original BTE.
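The sub-batching idea itself is simple divide-and-combine. The sketch below shows only that structure, with a placeholder standing in for the expensive cryptographic combine step; the function names and sub-batch size are illustrative, not taken from BEAST-MEV:

```python
from concurrent.futures import ProcessPoolExecutor

def combine_subbatch(subbatch):
    """Placeholder for combining partial decryptions of one sub-batch.
    In the real scheme this is the costly cryptographic step; here it is simulated."""
    return [f"plaintext-of-{ct}" for ct in subbatch]

def decrypt_batch(ciphertexts, subbatch_size=64):
    """Split a batch into independent sub-batches and combine them in parallel."""
    subbatches = [ciphertexts[i:i + subbatch_size]
                  for i in range(0, len(ciphertexts), subbatch_size)]
    with ProcessPoolExecutor() as pool:
        combined = pool.map(combine_subbatch, subbatches)
    return [pt for sub in combined for pt in sub]

if __name__ == "__main__":
    batch = [f"ct{i}" for i in range(512)]  # a 512-transaction batch
    print(len(decrypt_batch(batch)))        # 512
```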

BTE’s potential also extends to protocols such as CoW Swap, which already mitigate MEV through batch auctions and intent-based matching yet still expose parts of the order flow in public mempools. Integrating BTE before solver submission would seal that gap and provide end-to-end transaction privacy. For now, Shutter Network remains the most promising candidate for early adoption, with other protocols likely to follow once implementation frameworks mature.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Cointelegraph does not endorse the content of this article nor any product mentioned herein. Readers should do their own research before taking any action related to any product or company mentioned and carry full responsibility for their decisions.

Source: https://cointelegraph.com/news/how-batched-threshold-encryption-could-end-extractive-mev-and-make-defi-fair-again?utm_source=rss_feed&utm_medium=feed&utm_campaign=rss_partner_inbound
