Author: a16z
Compiled by: Deep Tide TechFlow
a16z (Andreessen Horowitz) recently released its list of "big ideas" that may emerge in the technology sector in 2026. These ideas were proposed by partners from its Apps, American Dynamism, Biotechnology, Cryptocurrency, Growth, Infrastructure, and Speedrun teams.
Below are selected key ideas and insights from its crypto-focused contributors, covering topics that range from AI agents and artificial intelligence, stablecoins, tokenization and finance, and privacy and security to prediction markets and other applications. For more on the technology outlook for 2026, please read the full article.
Today, aside from stablecoins and some core infrastructure, almost all well-performing crypto companies have become, or are in the process of becoming, trading platforms. But what is the end state if "every crypto company becomes a trading platform"? A flood of homogeneous competitors will not only fragment user attention but may also leave only a few winners standing. Companies that pivot to trading too early may miss the chance to build more differentiated, sustainable business models.
I fully understand the challenges founders face in maintaining a healthy financial position, but solely pursuing short-term product-market fit can come at a cost. This is particularly pronounced in the crypto industry, where the unique dynamics surrounding tokens and speculation often lead founders down a path of "instant gratification," much like a "marshmallow test."
There's nothing wrong with trading itself—it is an important function of markets—but it isn't necessarily the end goal. Founders who focus on the product itself and pursue product-market fit with a long-term perspective are likely to be the bigger winners in the end.
– Arianna Simpson, General Partner, a16z Crypto Team
We have seen banks, fintech companies, and asset managers show great interest in bringing U.S. stocks, commodities, indices, and other traditional assets onto the blockchain. However, as more and more of these assets come on-chain, their tokenization tends to be literal—a one-to-one wrapping of existing real-world asset structures—without fully utilizing crypto-native features.
In contrast, synthetic asset forms like perpetual futures (perps) offer deeper liquidity and are simpler to implement. Perps also provide an easily understood leverage mechanism, making them arguably the most suitable crypto-native derivative. Emerging-market equities are perhaps one of the most interesting asset classes to "perpify." For some stocks, for example, the zero-days-to-expiration (0DTE) options market is often deeper than the spot market, making perp versions a worthwhile experiment.
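To make the leverage-and-tracking mechanism concrete, here is a minimal, hypothetical Python sketch of how a perp's periodic funding payment ties its price to an underlying index without any spot settlement. The function name, interest-rate term, and clamp bounds are illustrative assumptions, not any specific venue's rules.

```python
# Minimal sketch of a perpetual-futures funding payment (illustrative only;
# real venues differ in clamping rules, intervals, and index construction).

def funding_payment(position_notional: float,
                    mark_price: float,
                    index_price: float,
                    interest_rate: float = 0.0001,
                    clamp: float = 0.0075) -> float:
    """Return the funding a long position pays (negative = long receives).

    The premium of the perp's mark price over the spot/index price is what
    keeps the contract tethered to the underlying, with no delivery of the
    asset itself ever taking place.
    """
    premium = (mark_price - index_price) / index_price
    funding_rate = max(-clamp, min(clamp, premium + interest_rate))
    return position_notional * funding_rate


# Example: a $10,000 long on a hypothetical emerging-market equity perp
# trading 0.2% above its index pays about $21 this funding interval.
print(funding_payment(10_000, mark_price=50.10, index_price=50.00))
```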
Ultimately, it comes down to a choice between "perping" and tokenizing; either way, we have reason to expect more crypto-native approaches to real-world assets in the coming year.
Similarly, in 2026, the stablecoin sector will see more "issuance innovation, not just tokenization." Stablecoins became mainstream in 2025, and their issuance continues to grow.
Without robust credit infrastructure, however, stablecoins function more like "narrow banks," holding specific, highly liquid assets considered extremely safe. Narrow banking is a useful product, but I don't believe it will become the long-term backbone of the on-chain economy.
We've seen many emerging asset managers, curators, and protocols pushing on-chain asset-backed loans secured by off-chain collateral. Typically, these loans are originated off-chain and then tokenized. I believe this tokenization approach offers limited advantages—perhaps only in distributing the loans to users who are already on-chain. Debt assets should instead be originated directly on-chain, rather than originated off-chain and tokenized afterward. Originating debt on-chain lowers loan servicing and back-end infrastructure costs and improves accessibility. The challenges lie in compliance and standardization, but developers are working to address them.
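As a rough illustration of why on-chain origination can cut servicing costs, here is a hypothetical Python sketch of a loan whose entire state (principal, accrued interest, repayments) lives in one place and is updated by a few deterministic rules rather than reconciled from off-chain batch files. In practice this logic would live in a smart contract; all names and numbers below are invented for illustration.

```python
# Hypothetical sketch of an on-chain-originated loan's state machine.
# Servicing (interest accrual, repayment, payoff) reduces to a handful of
# auditable state transitions instead of off-chain reconciliation.

from dataclasses import dataclass

@dataclass
class OnChainLoan:
    principal: float          # outstanding principal
    annual_rate: float        # fixed interest rate, e.g. 0.10 for 10%
    accrued_interest: float = 0.0

    def accrue(self, days: int) -> None:
        """Accrue simple interest for `days` days."""
        self.accrued_interest += self.principal * self.annual_rate * days / 365

    def repay(self, amount: float) -> None:
        """Apply a repayment: interest first, then principal."""
        paid_interest = min(amount, self.accrued_interest)
        self.accrued_interest -= paid_interest
        self.principal = max(0.0, self.principal - (amount - paid_interest))


loan = OnChainLoan(principal=100_000, annual_rate=0.10)
loan.accrue(days=30)        # roughly $822 of interest accrues
loan.repay(5_000)           # pays interest first, then reduces principal
print(round(loan.principal, 2), round(loan.accrued_interest, 2))
```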
– Guy Wuollet, General Partner, a16z Crypto Team
Today, most banks still run outdated software that modern developers would barely recognize: banks were early adopters of large-scale software systems as far back as the 1960s and 70s. By the 1980s and 90s, second-generation core banking software began to emerge (such as Temenos' GLOBUS and Infosys' Finacle). But this software is now aging, and upgrades have been far too slow. As a result, many of the banking industry's critical core ledgers—the key databases that record deposits, collateral, and other obligations—still run on mainframes in COBOL, relying on batch file interfaces rather than modern APIs.
The majority of global assets are still stored in these decades-old core ledgers. While these systems are battle-tested, trusted by regulators, and deeply integrated into complex banking workflows, they have also become an obstacle to innovation. Adding key features such as real-time payments, for example, can take months or even years and means working through substantial technical debt and complex regulatory requirements.
This is precisely where stablecoins come in. Over the past few years, stablecoins have found a product-market fit and successfully entered the mainstream financial arena. This year, traditional financial institutions (TradFi) have embraced stablecoins at an unprecedented level. Financial instruments such as stablecoins, tokenized deposits, tokenized government bonds, and on-chain bonds enable banks, fintech companies, and financial institutions to develop new products and serve more customers. More importantly, these innovations do not force institutions to rewrite their legacy systems—though these systems are aging, they have been running stably for decades. Stablecoins thus provide institutions with a completely new way to innovate.
– Sam Broner
As a mathematical economist, at the beginning of this year I found it extremely difficult to get consumer-grade AI models to understand my workflow; by November, however, I could give them abstract instructions as if they were PhD students… and they would sometimes return entirely new, correctly worked-out answers. We are also beginning to see AI used across a wider range of research areas—especially in reasoning, where AI models are now not only directly assisting discovery but also autonomously solving Putnam problems (from perhaps the world's hardest undergraduate math competition).
What remains unclear is where, and how, this research-assisting approach will help most. I anticipate, though, that AI's research capabilities will foster a new "polymath" style of research: one that speculates about relationships between disparate ideas and rapidly extrapolates from tentative answers. These answers may not be entirely accurate, but within certain logical frameworks they can at least point in the right direction. Ironically, this approach is somewhat like harnessing model "hallucination": once these models become "smart" enough, letting them roam freely through abstract spaces may produce plenty of nonsense, but it can sometimes lead to groundbreaking discoveries—much as humans are most creative when they break free of linear thinking and step outside well-marked directions.
Thinking about problems this way requires a completely new AI workflow—not just an agent-to-agent model, but a more complex model of agents wrapping agents—in which different layers of models help researchers evaluate early-stage outputs and gradually distill valuable insights. I have used this method to write papers, while others have used it for patent searches, inventing new art forms, and even (unfortunately) discovering new ways to attack smart contracts.
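As a rough sketch of what "agents wrapping agents" could look like in code, here is a hypothetical Python pipeline: an inner generator agent speculates freely, a wrapping evaluator agent filters its output for coherence, and an outer synthesizer condenses the survivors for the human researcher. `call_model` and the role names are placeholders for whatever model API is actually used, not a real library.

```python
# Hypothetical "agents wrapping agents" research workflow: an outer evaluator
# agent screens the speculative output of an inner generator agent before a
# synthesizer condenses what survives for the human researcher.

def call_model(role: str, prompt: str) -> str:
    """Stand-in for a real model call (e.g., an HTTP request to an LLM API)."""
    raise NotImplementedError("wire this up to your model provider")

def generate_hypotheses(topic_a: str, topic_b: str, n: int = 10) -> list[str]:
    # Inner agent: speculate freely about connections between two ideas.
    prompt = f"Propose {n} speculative links between {topic_a} and {topic_b}."
    return call_model("generator", prompt).splitlines()

def evaluate(hypothesis: str) -> float:
    # Wrapping agent: score each speculation for internal consistency.
    prompt = f"Rate 0-1 how logically coherent and checkable this is:\n{hypothesis}"
    return float(call_model("evaluator", prompt))

def distill(survivors: list[str]) -> str:
    # Outer agent: condense the filtered ideas into notes for the researcher.
    return call_model("synthesizer", "Summarize these leads:\n" + "\n".join(survivors))

def research_pass(topic_a: str, topic_b: str, threshold: float = 0.7) -> str:
    hypotheses = generate_hypotheses(topic_a, topic_b)
    survivors = [h for h in hypotheses if evaluate(h) >= threshold]
    return distill(survivors)
```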
To run this "wrapped reasoning agents" research model, however, we need better interoperability between models, along with a way to identify and fairly compensate each model's contribution—and these are problems that crypto technology can help solve.
– Scott Kominers, member of the a16z crypto research team and professor at Harvard Business School
With the rise of AI agents, a kind of "hidden tax" is weighing on the open internet and fundamentally disrupting its economic foundation. The disruption stems from a growing asymmetry between the internet's context layer and its execution layer: AI agents extract data from ad-supported content websites (the context layer) to serve users, while systematically bypassing the revenue streams that fund content creation (such as advertising and subscriptions).
To prevent further decline of the open web (and to protect the diverse content fueling AI), we need to deploy technological and economic solutions on a large scale. This could include next-generation sponsored content, micro-attribution systems, or other innovative funding models. Existing AI licensing agreements have also proven to be short-sighted stopgap measures, typically only compensating content providers for a small fraction of the revenue lost due to AI traffic encroachment.
The internet needs a completely new techno-economic model that lets value flow automatically. The most important shift next year will be from static licensing models to compensation based on real-time usage. That means testing and scaling systems—potentially leveraging blockchain-backed nanopayments and sophisticated attribution standards—that automatically reward each entity whose information contributes to an AI agent's successful completion of a task.
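To make the usage-based compensation idea concrete, here is a hypothetical Python sketch: once an agent completes a task, each source that contributed context receives a share of a tiny payment in proportion to an attribution weight. The URLs, weights, and payment size are invented; in practice the weights would come from an attribution system and the payment from a blockchain-backed micropayment rail.

```python
# Hypothetical sketch of usage-based compensation: split a tiny per-task
# payment among the sources whose content an AI agent actually used,
# in proportion to attribution weights produced by some attribution system.

def split_nanopayment(total_payment: float,
                      attributions: dict[str, float]) -> dict[str, float]:
    """Return each contributor's share of `total_payment`.

    `attributions` maps a source identifier (e.g. a URL) to a raw
    attribution score; shares are normalized so they sum to the total.
    """
    total_score = sum(attributions.values())
    if total_score == 0:
        return {source: 0.0 for source in attributions}
    return {source: total_payment * score / total_score
            for source, score in attributions.items()}

# Example: a $0.002 payment for one completed agent task, split across the
# three (hypothetical) pages whose content informed the answer.
shares = split_nanopayment(0.002, {
    "examplenews.com/article-1": 0.5,
    "recipeblog.example/post-9": 0.3,
    "forum.example/thread-42": 0.2,
})
print(shares)
```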
– Liz Harkavy, a16z Crypto Investment Team
Privacy is one of the key features driving global finance onto the blockchain. Yet it is also a crucial element that almost all blockchains lack today. On most blockchains, privacy is at best a secondary concern—an afterthought.
However, privacy itself is now a key differentiator for blockchain technology. More importantly, privacy can also create a "chain lock-in," or a privacy network effect. This is especially important in an era where performance competition is no longer a sufficient advantage.
Cross-chain bridge protocols make migrating between chains incredibly easy—as long as everything is public. That convenience vanishes once privacy is introduced: moving tokens across chains is easy, but moving privacy across chains is extremely hard. Users take on risk whenever they cross into or out of a privacy chain, whether to a public chain or to another privacy chain, because anyone observing on-chain data, mempools, or network traffic may be able to deduce their identity. Crossing that boundary can leak all kinds of metadata—such as correlations between transaction times and amounts—that make tracking users far easier.
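Here is a toy Python illustration of the metadata leak described above: an observer who sees withdrawals leaving a privacy chain and deposits arriving on a public chain can link them simply by matching amounts within a short time window, without breaking any cryptography. All data and thresholds are invented for illustration.

```python
# Toy illustration of the cross-chain metadata leak: even if balances on the
# privacy chain are hidden, an observer can match exits and entries across
# the boundary by amount and timing alone. All data here is invented.

def link_transfers(exits, entries, time_window=600, amount_tolerance=0.01):
    """Pair privacy-chain exits with public-chain entries that have a
    similar amount and arrive within `time_window` seconds."""
    links = []
    for ex_time, ex_amount in exits:
        for en_time, en_amount in entries:
            close_in_time = 0 <= en_time - ex_time <= time_window
            close_in_amount = abs(en_amount - ex_amount) <= amount_tolerance * ex_amount
            if close_in_time and close_in_amount:
                links.append(((ex_time, ex_amount), (en_time, en_amount)))
    return links

# (timestamp_seconds, amount) pairs observed on each side of the bridge.
privacy_chain_exits = [(1_000, 13.37), (1_200, 250.00)]
public_chain_entries = [(1_090, 13.37), (5_000, 99.00), (1_450, 249.50)]

print(link_transfers(privacy_chain_exits, public_chain_entries))
```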
Compared to many homogeneous new chains whose transaction fees may be driven down to near zero due to competition, blockchains with privacy features can generate stronger network effects. The reality is that if a "general-purpose" blockchain doesn't have a mature ecosystem, killer applications, or unfair distribution advantages, there's little reason for users to choose to use it or build on it, let alone develop loyalty.
On public blockchains, users can easily transact with users on other chains—it doesn't matter which chain they join. However, on private blockchains, the chain users choose to join becomes particularly important, because once joined, they are less likely to migrate to other chains to avoid the risk of privacy exposure. This phenomenon creates a "winner-takes-all" dynamic. And because privacy is crucial for most real-world applications, a few privacy chains may ultimately dominate the crypto space.
– Ali Yahya, General Partner of the a16z Crypto Team
Prediction markets have gradually entered the mainstream, and in the coming year, as they converge with crypto and artificial intelligence (AI), they will become larger, more widely used, and smarter—while also posing significant new challenges for builders.
First, more contracts will be listed on prediction markets. That means we'll have access not only to real-time odds on major elections and geopolitical events, but also to predictions on a wide range of nuanced outcomes and complex combinations of events. As these new contracts surface more information and become woven into the news ecosystem (a trend already underway), they will raise important social questions—such as how to weigh the value of the information they produce, and how to design these markets to be more transparent and auditable—questions that crypto technology can help address.
To handle the surge of new contracts, we also need new ways to reach consensus on real-world events in order to resolve them. Centralized platform resolution (confirming whether an event actually occurred) remains important, but its limits have shown in controversial cases like the Zelensky suit market (on whether he would wear a suit) and the Venezuelan election market. To handle these edge cases and help prediction markets expand into more practical applications, novel decentralized governance mechanisms and large language model (LLM) oracles can help determine the truth behind disputed outcomes.
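As one hedged sketch of how LLM oracles and governance might combine, the Python snippet below polls several independent model "oracles" on a disputed question, resolves the market only on a supermajority, and otherwise escalates to a human governance process. `query_llm_oracle` and the threshold are hypothetical placeholders, not any platform's actual resolution mechanism.

```python
# Hypothetical sketch of LLM-assisted market resolution: poll several
# independent model "oracles"; resolve only on a supermajority, otherwise
# escalate the dispute to a human governance process.

from collections import Counter

def query_llm_oracle(oracle_id: str, question: str) -> str:
    """Stand-in for asking one independent LLM oracle; returns 'YES' or 'NO'."""
    raise NotImplementedError("connect to an actual model/oracle service")

def resolve_market(question: str, oracle_ids: list[str],
                   supermajority: float = 0.8) -> str:
    votes = Counter(query_llm_oracle(oid, question) for oid in oracle_ids)
    outcome, count = votes.most_common(1)[0]
    if count / len(oracle_ids) >= supermajority:
        return outcome                     # e.g. "YES" or "NO"
    return "ESCALATE_TO_GOVERNANCE"        # humans decide the edge case
```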
AI's potential extends beyond LLM-driven oracles. AI agents active on these platforms can, for example, gather signals from around the world to gain short-term trading edges. This not only lets us see the world from entirely new angles but also enables more accurate forecasts. (Projects like Prophet Arena have already stoked excitement in the field.) Beyond offering the insight of a sophisticated political analyst, these agents may also reveal the fundamental drivers of complex social events as we study their emergent strategies.
Will prediction markets replace opinion polls? No. On the contrary, they will improve polling (and polling data can in turn feed into prediction markets). As a professor of political economy, I am most excited about prediction markets working in tandem with a diverse polling ecosystem—but we will need to rely on new technologies: AI, which can improve the survey experience, and crypto, which can provide entirely new ways to verify that survey respondents are humans rather than bots.
– Andy Hall, Crypto Research Advisor at a16z, Professor of Political Economy at Stanford University
For years, SNARKs (succinct non-interactive arguments of knowledge—cryptographic proofs that let a verifier check a computation's correctness without re-running it) have been used primarily in blockchains. The reason is their prohibitive computational overhead: proving a computation can be a million times more expensive than simply running it. Where that overhead can be amortized across tens of thousands of validators, it is worthwhile; elsewhere it is impractical.
That is about to change. By 2026, the computational overhead of zkVM (zero-knowledge virtual machine) provers will fall to roughly 10,000x, with a memory footprint of only a few hundred megabytes—fast enough to run on phones and cheap enough for broad use across many scenarios. One reason ~10,000x may be a critical tipping point: the parallel throughput of a high-end GPU is roughly 10,000x that of a laptop CPU. By the end of 2026, a single GPU should be able to prove a CPU's computations in real time.
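A back-of-the-envelope check of why ~10,000x is the tipping point, using the article's own rough figures (not benchmarks): if proving costs about 10,000x the original work, and a high-end GPU offers about 10,000x the parallel throughput of a laptop CPU, the two factors cancel and the GPU keeps pace with the CPU.

```python
# Back-of-the-envelope check of the "10,000x tipping point" using the
# article's rough figures (illustrative, not benchmarks).

cpu_seconds_of_work = 60.0        # a CPU workload that takes 1 minute to run
prover_overhead = 10_000          # proving costs ~10,000x the original work
gpu_vs_cpu_throughput = 10_000    # high-end GPU ~10,000x a laptop CPU's parallel throughput

proving_time_on_gpu = cpu_seconds_of_work * prover_overhead / gpu_vs_cpu_throughput
print(proving_time_on_gpu)        # 60.0 seconds: the proof keeps pace with the computation
```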
This will unlock a vision long proposed in research papers: verifiable cloud computing. If you already run CPU workloads in the cloud (because your workloads don't benefit from GPU acceleration, because you lack the relevant expertise, or for historical reasons), you will be able to obtain cryptographic proofs of their correctness at reasonable cost. And because the prover is already optimized for GPUs, no changes to your code are required.
– Justin Thaler, member of the a16z crypto research team and Associate Professor of Computer Science at Georgetown University
— a16z crypto editorial team


