When most startups were spinning up servers, Enterpret went all-in on serverless. Today, that architecture powers a massive AI platform for customer-feedback analysis, helping companies like Canva, Atlassian, Perplexity, and Notion stay close to their users.

How Enterpret built a scalable AI platform with just two engineers

2025/12/01 21:26

When most early-stage SaaS startups were provisioning servers and planning Kubernetes clusters, Enterpret quietly went the other way. The team, barely three people at the time, decided to build an enterprise-grade AI feedback platform primarily on AWS Lambda.

It was a contrarian bet. Five years later, it remains one of the foundational decisions that shaped the company’s architecture, culture, and speed.

A constraint-driven beginning

In the earliest days, Enterpret needed to ingest immense volumes of customer feedback data — bursts of text, context, and metadata coming in waves whenever a client synced historical data or when a public event went viral. The heavy compute sat on ingestion and enrichment; the actual user-facing queries were comparatively light.

Capital was scarce, engineering capacity even more so. “We didn’t have the luxury of always-on compute. Maintaining clusters wasn’t realistic with two engineers and an intern,” Chief Architect Anshal Dwivedi recalls.

Lambda offered something traditional compute couldn’t: elasticity without cost drag. You paid only when something ran. Idle was free.

Enterpret launched with eight microservices and around 35 Lambda functions: a small surface area, but one that was fast to evolve. It let the team move with urgency without burning runway on infrastructure.

What made the decision notable wasn’t the early commitment to serverless; it was how deliberately the team engineered an exit ramp. If workloads ever demanded something more persistent, migrating to ECS would require little more than swapping a deployment wrapper. The business logic would remain untouched.
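The exit-ramp pattern can be sketched as transport-agnostic business logic behind a thin deployment wrapper. This is an illustrative sketch, not Enterpret's actual code: the `Handle`, `serveHTTP`, and type names are hypothetical, and a real Lambda build would wire the same handler into `lambda.Start` from the `github.com/aws/aws-lambda-go` SDK instead of the simulated entry point shown here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Request and Response are the transport-agnostic types the business logic sees.
type Request struct {
	Query string `json:"query"`
}
type Response struct {
	Result string `json:"result"`
}

// Handle is the business logic; it knows nothing about Lambda or ECS.
func Handle(req Request) (Response, error) {
	return Response{Result: "processed: " + req.Query}, nil
}

// serveHTTP is the ECS-style wrapper: the same Handle behind net/http.
// Migrating off Lambda means swapping which wrapper main() invokes.
func serveHTTP(addr string) error {
	http.HandleFunc("/invoke", func(w http.ResponseWriter, r *http.Request) {
		var req Request
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		resp, err := Handle(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(resp)
	})
	return http.ListenAndServe(addr, nil)
}

func main() {
	// In a Lambda build, main would instead call lambda.Start with an
	// adapter around Handle; only this entry point changes per target.
	resp, _ := Handle(Request{Query: "feedback"})
	fmt.Println(resp.Result)
}
```

Because the wrapper is the only deployment-specific code, moving a service between compute targets touches one file, not the business logic.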

That foresight would turn out to be one of the most important choices the team made.

The monorepo that kept the system coherent

As the product footprint expanded, Enterpret faced a new problem: managing growth without splintering the codebase. Its response was another decision that goes against conventional startup advice — a single Go monorepo for every backend microservice, shared library, and infrastructure configuration.

Rather than chaos, it delivered consistency.

A model change could be made once, reviewed once, and deployed everywhere. Error codes, logging formats, and tracing standards remained uniform across services — a blessing in a distributed system where debugging normally involves spelunking across repos and log streams.

Refactoring became routine, not risky. IDE-level type-checking guarded against silent breakage. Deployments stayed predictable.

That same monorepo now houses 26 services, up from the original eight. Deployments happen several times a week, with the team moving quickly because the underlying structure never fractured.

A lightweight RPC layer that still holds up

Very early on, the team ran into a limitation: AWS API Gateway didn’t support gRPC natively, yet Enterpret needed a compact, binary-first communication layer suited for Lambda.

The typical path would have involved workarounds or adoption of heavier frameworks. Instead, they built a lean RPC abstraction that supported multiple encodings — protobuf over HTTP for efficiency, JSON for flexibility, and compatibility for gRPC downstream.

It took a few days to shape, not months. Yet it remains the backbone of service communication even now. Compression, distributed tracing, metrics, and client generation were layered on without touching individual services — the compounding effect the team now optimizes for deliberately.
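A minimal sketch of such a multi-encoding RPC layer, assuming a `Codec` interface keyed by content type. The names are hypothetical and only the stdlib JSON codec is shown; a protobuf codec built on `google.golang.org/protobuf` would implement the same interface and register alongside it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Codec abstracts the wire encoding so service code never touches it.
type Codec interface {
	Marshal(v any) ([]byte, error)
	Unmarshal(data []byte, v any) error
}

// jsonCodec is the flexible, human-readable encoding.
type jsonCodec struct{}

func (jsonCodec) Marshal(v any) ([]byte, error)   { return json.Marshal(v) }
func (jsonCodec) Unmarshal(d []byte, v any) error { return json.Unmarshal(d, v) }

// codecs maps Content-Type values to codecs. A protobuf codec would
// register under "application/x-protobuf" in exactly the same way.
var codecs = map[string]Codec{
	"application/json": jsonCodec{},
}

// call simulates one RPC hop: pick a codec, encode, "send", decode.
// Compression, tracing, and metrics can wrap the wire bytes here, in one
// place, without any individual service changing.
func call(contentType string, req, out any) error {
	c, ok := codecs[contentType]
	if !ok {
		return fmt.Errorf("unsupported content type %q", contentType)
	}
	wire, err := c.Marshal(req)
	if err != nil {
		return err
	}
	return c.Unmarshal(wire, out)
}

func main() {
	type Ping struct{ Msg string }
	var got Ping
	if err := call("application/json", Ping{Msg: "hello"}, &got); err != nil {
		panic(err)
	}
	fmt.Println(got.Msg)
}
```

Centralizing encoding behind one interface is what makes cross-cutting features compound: they land in `call`, not in 26 services.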

When Lambda stopped being the right answer

Growth eventually revealed the limits of serverless.

Frontend analytics surfaced the first crack: cold starts added noticeable latency when dashboards fired dozens of parallel queries. Provisioned concurrency would have reduced the lag, but not without making the system expensive to run. Migrating those workloads to ECS brought the P95 down and costs along with it.

Long-running jobs followed. Lambda’s 15-minute cap worked for most async tasks, but report generation and exports needed more breathing room. Enterpret turned to AWS Batch backed by spot instances, achieving the same flexibility at a fraction of the cost.

There were other restrictions too, such as Lambda’s 6MB payload cap and API Gateway’s 29-second timeout. The team routed around these with S3-based response offloading and request batching, but the lesson was clear: the right tool changes over time.
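The response-offloading pattern might look like the following sketch: small responses travel in-band, while anything over the limit is written to storage and replaced by a reference. The `BlobStore` interface and all names are hypothetical, with an in-memory stand-in where a real implementation would upload to S3 and return a presigned URL.

```go
package main

import (
	"fmt"
)

const maxInlineBytes = 6 * 1024 * 1024 // Lambda's ~6MB response limit

// BlobStore stands in for S3; a real implementation would upload the
// payload and return a presigned GET URL.
type BlobStore interface {
	Put(key string, data []byte) (url string, err error)
}

// memStore is an in-memory stand-in used here for illustration only.
type memStore map[string][]byte

func (m memStore) Put(key string, data []byte) (string, error) {
	m[key] = data
	return "https://example-bucket.s3.amazonaws.com/" + key, nil
}

// Envelope is what the caller actually receives from the function.
type Envelope struct {
	Inline []byte `json:"inline,omitempty"` // small responses travel in-band
	Ref    string `json:"ref,omitempty"`    // large ones are fetched from storage
}

// respond inlines small payloads and offloads large ones to the blob store.
func respond(store BlobStore, key string, payload []byte) (Envelope, error) {
	if len(payload) <= maxInlineBytes {
		return Envelope{Inline: payload}, nil
	}
	url, err := store.Put(key, payload)
	if err != nil {
		return Envelope{}, err
	}
	return Envelope{Ref: url}, nil
}

func main() {
	store := memStore{}
	small, _ := respond(store, "r1", []byte("tiny"))
	big, _ := respond(store, "r2", make([]byte, maxInlineBytes+1))
	fmt.Println(len(small.Inline), big.Ref != "")
}
```

Callers check which field is set and fetch from the reference when needed, so the 6MB cap stops being a constraint on response size.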

Because of how the team architected the system, migration was rarely a rewrite. Often, it was an hour.

Cost discipline as philosophy

For a bootstrapped startup, cost is not a metric but a survival constraint. Enterpret audited everything: memory allocation, idle compute, cold starts, cross-service chatter. Many Lambda functions still run on 128MB, made possible by Go’s efficiency.

At one point, a CloudWatch bill eclipsed total compute spend. It prompted stricter observability hygiene, alerting thresholds, billing reviews, and architecture choices rooted not in idealism but in operational reality.

The discipline stuck.

The playbook Enterpret now gives others

Looking back, Dwivedi says the company would make the same choices again. Serverless gave the team speed, cost control, and focus when it needed them most. The monorepo, the RPC abstraction, the migration-ready design: all of it would stay the same.

But the company would be more cautious about force-fitting workloads that don't belong on Lambda. Earlier, one of its data collection services required long-running execution, so the team stitched it together with AWS Step Functions and checkpointing logic to bypass the timeout. It worked, but maintaining it was painful. AWS Batch would have been the right call from day one.

His advice to other engineering teams boils down to a few principles:

Keep infrastructure dead simple. Enterpret didn't host a single piece of infrastructure itself for four years. Managed services and boring technology beat clever solutions every time. "The startups that survive aren't the ones with clever infrastructure; they're the ones that stayed focused on their product while the cloud did the heavy lifting," Dwivedi notes.

Be ruthless about cost. It directly impacts runway. Set spending alerts, review bills weekly, question every line item. Small leaks compound into hemorrhages.

Design for horizontal scale from day one. The perceived effort gap between "quick-and-dirty" and "scalable" is often an illusion. A few good abstractions and clear service boundaries take marginally more time upfront but save you from rewrites later.

Don't chase cloud agnosticism too early. Enterpret committed fully to AWS. When you constrain yourself to what works everywhere, you're optimizing for the lowest common denominator. You get better systems by embracing what your cloud does best, not what every cloud does adequately.

Five years on, the architecture still holds.

The journey continues

Today, Enterpret processes hundreds of millions of customer feedback records. Many of the systems the team wrote in the first year are still running — not just running, but thriving. They've evolved, scaled, and adapted because the team found those compounding abstractions early and stuck with them.

The company is now building agentic architectures, pushing into new territories of what AI can do with customer feedback. The landscape keeps evolving, and the team is still learning what works.

"Some patterns from our serverless journey translate beautifully. Others need rethinking entirely," shared Dwivedi.

The lesson isn't that serverless is the answer for everyone. It's that small, thoughtful decisions compound over time. Design systems that evolve rather than expand. Choose clarity over cleverness. And when you hit the limits of a technology, migrate; don't rewrite.

This is just a glimpse of what Enterpret builds. Read more on their engineering blog.
