
Building Spotify for Sermons.

2025/12/11 21:15

Building for large systems and long-running background jobs.

Credit: Ilias Chebbi on Unsplash

Months ago, I took on a role that required building infrastructure for media (audio) streaming. Beyond serving audio as streamable chunks, the work involved long-running media-processing jobs and an extensive RAG pipeline covering transcription, transcoding, embedding, and sequential media updates. Building an MVP with a production mindset meant iterating until we achieved a seamless system, integrating features according to an underlying stack of priorities.

Of Primary Concern

Over the course of building, each iteration came in response to an immediate and often encompassing need. The initial concern was queuing jobs, which Redis readily handled; we simply fired and forgot. BullMQ within the NestJS framework gave us even better control over retries, backlogs, and the dead-letter queue. Locally, and with a few payloads in production, we got the media flow right. We were soon burdened by the weight of observability:
Logs → Record of jobs (requests, responses, errors).
Metrics → How much / how often these jobs run, fail, complete, etc.
Traces → The path a job took across services (functions/methods called within the flow path).
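For context, the BullMQ setup described above can be sketched roughly as follows. This is an illustrative configuration, not our production values; the queue name, connection details, and job options are assumptions:

```javascript
import { Queue } from 'bullmq';

// Shared retry/backoff defaults for media jobs (values are illustrative).
export const defaultJobOptions = {
  attempts: 3,                                   // retry each job up to 3 times
  backoff: { type: 'exponential', delay: 5000 }, // 5s, 10s, 20s between attempts
  removeOnComplete: true,                        // keep Redis lean
  removeOnFail: false,                           // keep failed jobs for inspection
};

// "Fire and forget": producers add jobs and move on; workers pick them up.
export const mediaQueue = new Queue('media-processing', {
  connection: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 },
  defaultJobOptions,
});

export const enqueueTranscode = (audioId) =>
  mediaQueue.add('transcode', { audioId });
```

The `defaultJobOptions` are what give you retries and a place to inspect dead jobs; without them, a failed job simply disappears.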

You can solve some of these by designing APIs and building a custom dashboard to plug them into, but the problem of scalability will surface. And in fact, we did design the APIs.

Building for Observability

Faced with the challenge of managing complex, long-running backend workflows, where failures must be recoverable and state must be durable, Inngest became our architectural salvation. It fundamentally reframed our approach: each long-running background job becomes a background function, triggered by a specific event.

For instance, a Transcription.request event triggers a TranscribeAudio function. This function might contain step-runs for fetch_audio_metadata, deepgram_transcribe, parse_save_transcription, and notify_user.

Deconstructing the Workflow: The Inngest Function and Step-runs

The core durability primitive is the step-run. A background function is internally broken down into step-runs, each containing a minimal, atomic block of logic.

  • Atomic Logic: A function executes your business logic step by step. Each step-run is retried independently on failure, and the results of steps that already completed are memoized and replayed, so a retry resumes the run from the point of failure rather than re-executing earlier work.
  • Response Serialization: A step-run is defined by its response. This response is automatically serialized, which is essential for preserving complex or strongly-typed data structures across execution boundaries. Subsequent step-runs can reliably parse this serialized response, or logic can be merged into a single step for efficiency.
  • Decoupling and Scheduling: Within a function, we can conditionally queue or schedule new, dependent events, enabling complex fan-out/fan-in patterns and long-term scheduling up to a year. Errors and successes at any point can be caught, branched, and handled further down the workflow.
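To make the durability idea concrete, here is a toy, framework-free sketch — emphatically not Inngest's actual implementation — of how serialized step results can be persisted so that a retried run replays completed steps instead of re-executing them:

```javascript
// Toy illustration (not Inngest internals): a runner that persists each
// step's serialized result so a retried run replays completed steps
// instead of re-executing them.
function createRunner(store) {
  return {
    async run(stepId, fn) {
      if (store.has(stepId)) {
        // Completed in a previous attempt: replay the memoized result.
        return JSON.parse(store.get(stepId));
      }
      const result = await fn();
      store.set(stepId, JSON.stringify(result)); // serialize across boundaries
      return result;
    },
  };
}

// A workflow whose second step fails transiently on the first attempt.
async function transcribeWorkflow(step, state) {
  const meta = await step.run('fetch_audio_metadata', async () => ({ duration: 120 }));
  return step.run('deepgram_transcribe', async () => {
    if (state.failOnce) {
      state.failOnce = false;
      throw new Error('transient failure');
    }
    return `transcribed ${meta.duration}s of audio`;
  });
}

async function demo() {
  const store = new Map();          // durable state shared across attempts
  const state = { failOnce: true };
  try {
    return await transcribeWorkflow(createRunner(store), state);
  } catch {
    // Retry: fetch_audio_metadata is replayed from the store, not re-run.
    return transcribeWorkflow(createRunner(store), state);
  }
}
```

On the retry, the first step's serialized result is parsed back from the store, which is why step responses must survive serialization cleanly.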

Inngest function abstract:

import { inngest } from 'inngest-client';

export const createMyFunction = (dependencies) => {
  return inngest.createFunction(
    {
      id: 'my-function',
      name: 'My Example Function',
      retries: 3, // retries applied per step on failure
      concurrency: { limit: 5 },
      onFailure: async ({ event, error, step }) => {
        // runs once all retries are exhausted
        await step.run('handle-error', async () => {
          console.error('Error processing event:', error);
        });
      },
    },
    { event: 'my/event.triggered' },
    async ({ event, step }) => {
      const { payload } = event.data;

      // Step 1: define first step
      const step1Result = await step.run('step-1', async () => {
        // logic for step 1
        return `Processed ${payload}`;
      });

      // Step 2: define second step
      const step2Result = await step.run('step-2', async () => {
        // logic for step 2
        return step1Result + ' -> step 2';
      });

      // Step N: continue as needed
      await step.run('final-step', async () => {
        // finalization logic
        console.log('Finished processing:', step2Result);
      });

      return { success: true };
    },
  );
};
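Triggering the function above is then a single event send from anywhere in the API layer. A small helper keeps the event shape in one place; the client import path mirrors the abstract above and is illustrative:

```javascript
// Build the event that triggers 'my-function' above.
export function buildTriggerEvent(payload) {
  return { name: 'my/event.triggered', data: { payload } };
}

// In an API handler (client import path is illustrative):
//   import { inngest } from 'inngest-client';
//   await inngest.send(buildTriggerEvent('audio-123'));
```

The event name is the only coupling between the producer and the function, which is what makes the fan-out patterns described earlier possible.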

The event-driven model of Inngest provides granular insight into every workflow execution:

  • Comprehensive Event Tracing: Every queued function execution is logged against its originating event. This provides a clear, high-level trail of all activities related to a single user action.
  • Detailed Run Insights: For each function execution (both successes and failures), Inngest provides detailed logs via its ack (acknowledge) and nack (negative acknowledgment) reporting. These logs include error stack traces, full request payloads, and the serialized response payloads for every individual step-run.
  • Operational Metrics: Beyond logs, we gained critical metrics on function health, including success rates, failure rates, and retry counts, allowing us to continuously monitor the reliability and latency of our distributed workflows.

Building for Resilience

The caveat to relying on pure event processing is that while Inngest efficiently queues function executions, the events themselves are not internally queued in a traditional messaging broker sense. This absence of an explicit event queue can be problematic in high-traffic scenarios due to potential race conditions or dropped events if the ingestion endpoint is overwhelmed.

To address this and enforce strict event durability, we implemented a dedicated queuing system as a buffer.

Amazon Simple Queue Service (SQS) was the system of choice (though any robust queuing system would do), given our existing infrastructure on AWS. We architected a two-queue system: a Main Queue and a Dead Letter Queue (DLQ).

We established an Elastic Beanstalk (EB) worker environment specifically configured to consume messages directly from the Main Queue. If a message fails to be processed by the EB worker a set number of times, SQS automatically moves it from the Main Queue to the dedicated DLQ, ensuring no event is lost permanently if it fails to trigger or be picked up by Inngest. This worker environment differs from a standard EB web server environment in that its sole responsibility is message consumption and processing (in this case, forwarding the consumed message to the Inngest API endpoint).
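A minimal sketch of such a worker endpoint follows. The route, port, and Inngest event URL/key are assumptions, and the forwarding call is injected so the accept/reject logic can be exercised without network access:

```javascript
import http from 'node:http';

// Decide how to answer one message POSTed by the Elastic Beanstalk SQS daemon.
// `forward` delivers the event to Inngest and is injected so this logic can
// run without network access. Returning 200 deletes the message from the
// Main Queue; 500 makes SQS retry it and, after enough failures, dead-letter it.
export async function handleQueueMessage(body, forward) {
  try {
    const res = await forward(body);
    return res.ok ? 200 : 500;
  } catch {
    return 500;
  }
}

// Default forwarder: POST the message to the Inngest event API
// (the event key in the URL is a placeholder).
const forwardToInngest = (event) =>
  fetch('https://inn.gs/e/EVENT_KEY', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(event),
  });

// The EB worker daemon POSTs each consumed Main Queue message here.
export const server = http.createServer((req, res) => {
  let raw = '';
  req.on('data', (chunk) => (raw += chunk));
  req.on('end', async () => {
    const status = await handleQueueMessage(JSON.parse(raw || '{}'), forwardToInngest);
    res.writeHead(status).end();
  });
});

// server.listen(process.env.PORT || 8080);
```

The important design point is that the worker never acknowledges a message it failed to forward, so the SQS redrive policy, not application code, decides when a message is dead-lettered.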

Understanding Limits and Specifications

An understated yet pertinent aspect of building enterprise-scale infrastructure is that long-running workloads consume substantial resources. A microservices architecture provides scalability per service, but storage, RAM, and timeouts still come into play. Our AWS instance type, for example, moved quickly from t3.micro to t3.small, and is now pegged at t3.medium. For long-running, CPU-intensive background jobs, horizontal scaling with tiny instances fails because the bottleneck is the time it takes to process a single job, not the volume of new jobs entering the queue.

Jobs such as transcoding and embedding are typically both CPU-bound and memory-bound: CPU-bound because they require sustained, intense CPU usage, and memory-bound because they often need substantial RAM to load large models or handle large files or payloads efficiently.

Ultimately, this augmented architecture, placing the durability of SQS and the controlled execution of an EB Worker environment directly upstream of the Inngest API, provided essential resiliency. We achieved strict event ownership, eliminated race conditions during traffic spikes, and gained a non-volatile dead letter mechanism. We leveraged Inngest for its workflow orchestration and debugging capabilities, while relying on AWS primitives for maximum message throughput and durability. The resulting system is not only scalable but highly auditable, successfully translating complex, long-running backend jobs into secure, observable, and failure-tolerant micro-steps.


Building Spotify for Sermons. was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

