The culprit behind the SeaTunnel Kafka Connector "OutOfMemory" error, found

The One Line of Code That Ate 12GB of SeaTunnel Kafka Connector's Memory in 5 Minutes

2025/09/12 13:30

What happened?

In Apache SeaTunnel version 2.3.9, the Kafka connector implementation carried a memory-leak risk: when a streaming job read data from Kafka, memory could grow continuously until an OOM (Out Of Memory) error occurred, even with a read rate limit (read_limit.rows_per_second) configured.

What's the key issue?

In real deployments, users observed the following phenomena:

  1. A Kafka-to-HDFS streaming job was running on an 8-core, 12GB-memory SeaTunnel Engine cluster
  2. Although read_limit.rows_per_second=1 was configured, memory usage soared from 200MB to 5GB within 5 minutes
  3. After the job was stopped, memory was not released; upon resuming, memory kept growing until OOM
  4. Ultimately, the worker nodes restarted

Root Cause Analysis

Code review found the root cause in the createReader method of the KafkaSource class, where elementsQueue was initialized as an unbounded queue:

elementsQueue = new LinkedBlockingQueue<>(); 

This implementation had two critical issues:

  1. Unbounded Queue: A LinkedBlockingQueue created without a capacity can grow without limit. When the producer (the Kafka fetcher) outpaces the consumer, memory grows continuously.
  2. Ineffective Rate Limiting: Although users set read_limit.rows_per_second=1, that limit did not apply to Kafka consumption itself, so records accumulated in the in-memory queue.
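The mechanics of the two issues can be seen in a few lines of plain JDK code. This is a self-contained sketch, not SeaTunnel's actual classes: an unbounded LinkedBlockingQueue accepts every element a fast producer offers, while a bounded ArrayBlockingQueue rejects offers once full, pushing back on the producer instead of growing the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueBoundsDemo {
    public static void main(String[] args) {
        // Unbounded: offer() always succeeds, so a producer that is
        // faster than the consumer keeps allocating nodes indefinitely.
        BlockingQueue<byte[]> unbounded = new LinkedBlockingQueue<>();
        for (int i = 0; i < 10_000; i++) {
            unbounded.offer(new byte[1024]); // never rejected
        }
        System.out.println("unbounded size: " + unbounded.size());

        // Bounded: once the fixed capacity is reached, offer() returns
        // false (and put() would block), capping memory at queue.size.
        BlockingQueue<byte[]> bounded = new ArrayBlockingQueue<>(1000);
        int accepted = 0;
        for (int i = 0; i < 10_000; i++) {
            if (bounded.offer(new byte[1024])) {
                accepted++;
            }
        }
        System.out.println("bounded accepted: " + accepted);
    }
}
```

With no consumer draining either queue, the unbounded queue holds all 10,000 elements while the bounded one stops at its capacity of 1,000, which is exactly the backpressure behavior the fix introduces.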

Solution

The community resolved this issue via PR #9041. The main improvements include:

  1. Introducing a Bounded Queue: Replacing LinkedBlockingQueue with a fixed-size ArrayBlockingQueue
  2. Configurable Queue Size: Adding a queue.size configuration parameter, allowing users to adjust as needed
  3. Safe Default Value: Setting DEFAULT_QUEUE_SIZE=1000 as the default queue capacity

Core implementation changes:

public class KafkaSource {
    private static final String QUEUE_SIZE_KEY = "queue.size";
    private static final int DEFAULT_QUEUE_SIZE = 1000;

    public SourceReader<SeaTunnelRow, KafkaSourceSplit> createReader(
            SourceReader.Context readerContext) {
        int queueSize = kafkaSourceConfig.getInt(QUEUE_SIZE_KEY, DEFAULT_QUEUE_SIZE);
        BlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>> elementsQueue =
                new ArrayBlockingQueue<>(queueSize);
        // ...
    }
}

Best Practice Recommendations

For users of the SeaTunnel Kafka connector, it is recommended to:

  1. Upgrade Version: Use the SeaTunnel version containing this fix
  2. Configure Properly: Set an appropriate queue.size value according to business needs and data characteristics
  3. Monitor Memory: Even with a bounded queue, monitor system memory usage
  4. Understand Rate Limiting: The read_limit.rows_per_second parameter applies to downstream processing, not Kafka consumption
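To illustrate recommendation 2, a queue.size setting would sit alongside the other Kafka source options in the job's config file. The sketch below is illustrative only: the broker address, topic, and surrounding option names are placeholders, and you should confirm the exact keys against the SeaTunnel documentation for your version; only queue.size and read_limit.rows_per_second come from this article.

```hocon
env {
  job.mode = "STREAMING"
  # Applies to downstream processing, not to Kafka consumption itself
  read_limit.rows_per_second = 1000
}

source {
  Kafka {
    # Placeholder connection details
    bootstrap.servers = "kafka-broker:9092"
    topic = "events"
    # Caps the in-memory buffer between the fetcher and the reader;
    # default is 1000 after the fix
    queue.size = 2000
  }
}
```

A larger queue.size trades higher peak memory for more buffering headroom during consumer slowdowns; the bound guarantees memory stays capped either way.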

Summary

This fix not only resolved the memory overflow risk but also improved system stability and configurability. By introducing bounded queues and configurable parameters, users can better control system resource usage and avoid OOM caused by data backlog. It also reflects the virtuous cycle of open-source communities continuously improving product quality through user feedback.
