Sui has introduced Tidehunter, a purpose-built blockchain storage engine designed to replace RocksDB by reducing write amplification and delivering higher, more predictable performance.

Tidehunter: Sui’s Next-Generation Database Optimized For Low Latency And Reduced Write Amplification

Sui, a Layer 1 blockchain network, has introduced Tidehunter, a new storage engine engineered to align with the performance demands, data access characteristics, and operational constraints commonly found in contemporary blockchain infrastructures. 

The system is positioned as a potential successor to the existing database layer used by both validators and full nodes, reflecting a broader effort to modernize core infrastructure in response to the evolving scale and workload profiles of production blockchain environments.

Sui originally relied on RocksDB as its primary key–value storage layer, a widely adopted and mature solution that enabled rapid protocol development. As the platform expanded and operational demands increased, fundamental limitations of general-purpose LSM-tree databases became increasingly apparent in production-like environments. 

Extensive tuning and deep internal expertise could not fully address structural inefficiencies that conflicted with the access patterns typical of blockchain systems. This led to a strategic shift toward designing a storage engine optimized specifically for blockchain workloads, resulting in the development of Tidehunter.

A central factor behind this decision was persistent write amplification. Measurements under realistic Sui workloads showed amplification levels of roughly ten to twelve times, meaning that relatively small volumes of application data generated disproportionately large amounts of disk traffic. While such behavior is common in LSM-based systems, it reduces effective storage bandwidth and intensifies contention between background compaction and read operations. In write-intensive or balanced read-write environments, this overhead becomes increasingly restrictive as throughput scales. 
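The arithmetic behind that concern is straightforward. A minimal illustration, using the roughly ten-to-twelve-times figure reported above (the traffic numbers below are examples, not measurements):

```python
# Write amplification = physical bytes written to disk / logical bytes
# written by the application. The 12x figure mirrors the amplification
# range reported for Sui's RocksDB-backed workloads.

def write_amplification(app_bytes_written: int, disk_bytes_written: int) -> float:
    """Ratio of physical disk writes to logical application writes."""
    return disk_bytes_written / app_bytes_written

# An application writing 100 MB/s of state at 12x amplification
app_mb_s = 100
amplification = 12
disk_mb_s = app_mb_s * amplification  # 1,200 MB/s of physical write traffic

assert write_amplification(app_mb_s, disk_mb_s) == 12.0
```

At that ratio, even moderate application write rates translate into disk traffic that approaches the sustained bandwidth of typical NVMe devices, which is exactly the saturation behavior described below.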

Load testing on high-performance clusters confirmed the impact, with disk utilization nearing saturation despite moderate application write rates, highlighting the growing mismatch between conventional storage architectures and modern blockchain performance requirements.

Tidehunter Architecture: A Storage Engine Optimized For Blockchain Access Patterns And Sustained High-Throughput Workloads

Storage behavior in Sui and comparable blockchain platforms is dominated by a small set of recurring data access patterns, and Tidehunter is architected specifically around these characteristics. A large portion of state is addressed using cryptographic hash keys that are evenly distributed and typically map to relatively large records, which removes locality but simplifies consistency and correctness. 

At the same time, blockchains rely heavily on append-oriented structures, such as consensus logs and checkpoints, where data is written in order and later retrieved using monotonically increasing identifiers. These environments are also inherently write-heavy, while still requiring fast access on latency-critical read paths, making excessive write amplification a direct threat to both throughput and responsiveness.

At the center of Tidehunter is a high-concurrency write pipeline built to exploit the parallel capabilities of modern solid-state storage. Incoming writes are funneled through a lock-free write-ahead log capable of sustaining extremely high operation rates, with contention limited to a minimal allocation step. 

Data copying proceeds in parallel, and the system avoids per-operation system calls by using writable memory-mapped files, while durability is handled asynchronously by background services. This design produces a predictable and highly parallel write path that can saturate disk bandwidth without becoming constrained by CPU overhead.
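The key idea, that contention is confined to a minimal allocation step while the actual data copy proceeds in parallel, can be sketched as follows. This is an illustrative model, not Tidehunter's actual code; the class and method names are assumptions, and a plain bytearray stands in for a writable memory-mapped file.

```python
# Sketch: writers contend only to reserve an offset in the log; the payload
# copy happens outside the lock, so concurrent writers do not serialize on
# the copy. Durability (flushing to disk) would be handled by a background
# service, which is omitted here.
import threading

class WalSegment:
    def __init__(self, capacity: int):
        self.buf = bytearray(capacity)      # stand-in for a writable mmap'd file
        self.tail = 0
        self.alloc_lock = threading.Lock()  # the only point of contention

    def reserve(self, size: int) -> int:
        """Atomically claim a region of the log; returns its starting offset."""
        with self.alloc_lock:
            off = self.tail
            self.tail += size
            return off

    def append(self, record: bytes) -> int:
        off = self.reserve(len(record))
        # The copy proceeds outside the lock, so writers run in parallel here.
        self.buf[off:off + len(record)] = record
        return off

seg = WalSegment(1024)
offsets = [seg.append(b"record-%d" % i) for i in range(3)]
```

In a real lock-free implementation the reservation would be a single atomic fetch-and-add rather than a mutex, but the structure is the same: one tiny serialized step, then fully parallel copies.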

Reducing write amplification is treated as a primary architectural objective rather than an optimization step. Instead of using the log as a temporary staging area, Tidehunter stores data permanently in log segments and builds indexes that reference offsets directly, eliminating repeated rewrites of values. 

Indexes are heavily sharded to keep write amplification low and to increase parallelism, removing the need for traditional LSM-tree structures. For append-dominated datasets, such as checkpoints and consensus records, specialized sharding strategies keep recent data tightly grouped so that write overhead remains stable even as historical data grows.

For tables addressed by uniformly distributed hash keys, Tidehunter introduces a uniform lookup index optimized for predictable, low-latency access. Rather than issuing multiple small and random reads, the index reads a slightly larger contiguous region that statistically contains the desired entry, allowing most lookups to complete in a single disk round trip. 

This approach deliberately trades some read throughput for lower and more stable latency, a tradeoff that becomes practical because reduced write amplification frees substantial disk bandwidth for read traffic. The result is more consistent performance on latency-sensitive operations such as transaction execution and state validation.
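The single-round-trip property follows from key uniformity, and a simplified model (the function names and window mechanics here are my assumptions) shows why: if hashes are evenly spread, an entry's position in a hash-sorted index is predictable, so one slightly oversized contiguous read almost always covers it.

```python
# Sketch: predict an entry's position from its hash, then do one contiguous
# "read" of a window around that prediction and scan it in memory, instead
# of several small random reads.

def build_index(hashes: list[int]) -> list[int]:
    """Entries sorted by hash; stands in for an on-disk index file."""
    return sorted(hashes)

def lookup(index: list[int], key_hash: int, max_hash: int, window: int) -> bool:
    # If hashes are uniform, the entry's rank is proportional to its hash.
    predicted = int(key_hash / max_hash * len(index))
    lo = max(0, predicted - window)
    hi = min(len(index), predicted + window)
    # One contiguous read of the window, then an in-memory scan.
    return key_hash in index[lo:hi]

# Uniformly spaced hashes make the prediction essentially exact.
idx = build_index(list(range(0, 1_000_000, 1000)))
found = lookup(idx, 500_000, 1_000_000, window=8)
```

Reading the extra window bytes costs some throughput per lookup, which is the tradeoff the design accepts in exchange for a predictable one-round-trip latency profile.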

To further control tail latency at scale, Tidehunter combines direct I/O with application-managed caching. Large historical reads bypass the operating system’s page cache to prevent cache pollution, while recent and frequently accessed data is retained in user-space caches informed by application-level access patterns. In combination with its indexing layout, this reduces unnecessary disk round trips and improves predictability under sustained load.
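The caching policy can be sketched as a user-space LRU that hot reads pass through while large historical reads bypass entirely, the same goal direct I/O serves by skipping the OS page cache. This is an illustrative stand-in, not Tidehunter's actual cache:

```python
# Sketch: recent, frequently accessed data stays in a user-space LRU cache;
# historical reads bypass it so a large scan cannot evict latency-critical
# entries (cache pollution).
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict[str, bytes] = OrderedDict()

    def read(self, key: str, fetch, historical: bool = False) -> bytes:
        if historical:
            return fetch(key)  # bypass: don't pollute the cache
        if key in self.entries:
            self.entries.move_to_end(key)  # refresh LRU position
            return self.entries[key]
        value = fetch(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least-recently-used
        return value
```

Because the application knows which reads are historical and which are latency-critical, it can make this distinction explicitly, something the OS page cache, which sees only anonymous block reads, cannot do.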

Data lifecycle management is also simplified. Because records are stored directly in log segments, removing obsolete historical data can be performed by deleting entire log files once they fall outside the retention window. This avoids the complex and I/O-intensive compaction mechanisms required by LSM-based databases and enables faster, more predictable pruning even as datasets expand.
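Retention-based pruning then reduces to dropping whole segments. A minimal sketch, where the epoch-keyed segment layout and retention policy are illustrative rather than Tidehunter's actual scheme:

```python
# Sketch: because records live permanently in immutable log segments,
# pruning is just deleting every segment older than the retention window --
# an O(1)-per-segment file deletion, with no compaction or record-level I/O.

def prune(segments: dict[int, list[bytes]], newest_epoch: int,
          retention: int) -> dict[int, list[bytes]]:
    """Keep only segments whose epoch falls within the retention window."""
    cutoff = newest_epoch - retention
    return {epoch: data for epoch, data in segments.items() if epoch > cutoff}

segments = {1: [b"old"], 2: [b"older-ish"], 9: [b"recent"], 10: [b"newest"]}
kept = prune(segments, newest_epoch=10, retention=3)  # drops epochs 1 and 2
```

In an LSM database the same cleanup would require compaction passes that read and rewrite surviving records; here the cost is a handful of file deletions regardless of how much data is being discarded.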

Across workloads designed to reflect real Sui usage, Tidehunter demonstrates higher throughput and lower latency than RocksDB while consuming significantly less disk write bandwidth. The most visible improvement comes from the near elimination of write amplification, which allows disk activity to more closely match application-level writes and preserves I/O capacity for reads. These effects are observed both in controlled benchmarks and in full validator deployments, indicating that the gains extend beyond synthetic testing.

Evaluation is performed using a database-agnostic benchmark framework that models realistic mixes of inserts, deletions, point lookups, and iteration workloads. Tests are parameterized to reflect Sui-like key distributions, value sizes, and read-write ratios, and are executed on hardware aligned with recommended validator specifications. Under these conditions, Tidehunter consistently sustains higher throughput and lower latency than RocksDB, with the largest advantages appearing in write-heavy and balanced scenarios.
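A database-agnostic benchmark of this kind is typically driven by a parameterized workload description. The field names and values below are illustrative assumptions, not the actual framework's configuration:

```python
# Sketch: a workload is a weighted mix of operations plus key/value shape
# parameters, which can be replayed against any storage engine under test.
import random
from dataclasses import dataclass

@dataclass
class Workload:
    read_ratio: float    # fraction of point lookups
    write_ratio: float   # fraction of inserts
    delete_ratio: float  # fraction of deletions
    value_size: int      # bytes per value
    key_space: int       # number of distinct (hash) keys

    def next_op(self, rng: random.Random) -> str:
        r = rng.random()
        if r < self.read_ratio:
            return "get"
        if r < self.read_ratio + self.write_ratio:
            return "put"
        return "delete"

# A balanced read-write mix, the scenario where the article reports
# Tidehunter's largest advantage (parameter values are examples).
balanced = Workload(read_ratio=0.5, write_ratio=0.45, delete_ratio=0.05,
                    value_size=1024, key_space=10_000_000)
rng = random.Random(42)
ops = [balanced.next_op(rng) for _ in range(100)]
```

Holding the workload description fixed while swapping the engine underneath is what makes the comparison between Tidehunter and RocksDB apples-to-apples.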

Validator-level benchmarks further confirm the results. When integrated directly into Sui and subjected to sustained transaction load, systems using Tidehunter maintain stable throughput and lower latency at operating points where RocksDB-backed deployments begin to suffer from rising disk utilization and performance degradation. Measurements show reduced disk pressure, steadier CPU usage, and improved finality latency, highlighting a clear divergence in behavior under comparable load.

Tidehunter represents a practical response to the operational demands of long-running, high-throughput blockchain systems. As blockchains move toward sustained rather than burst-driven workloads, storage efficiency becomes a foundational requirement for protocol performance. The design of Tidehunter reflects a shift toward infrastructure built explicitly for that next stage of scale, with further technical detail and deployment plans expected to follow.

The post Tidehunter: Sui’s Next-Generation Database Optimized For Low Latency And Reduced Write Amplification appeared first on Metaverse Post.

