Most monitoring tools only tell you when something is already broken. But what if you could find issues before they become outages? I just published a deep dive on using AIOps for proactive anomaly detection. This isn't just theory: it's a complete, hands-on tutorial with the working code you need to try it yourself. The stack:

  • Infrastructure: defined with modern IaC tools, Terraform and Terragrunt
  • Observability: instrumented with OpenTelemetry
  • Analysis: powered by AWS DevOps Guru

Goodbye Manual Monitoring: How AIOps Spots Problems Before You Do

Limitations of Traditional Monitoring

Managing modern distributed applications has become increasingly complex. Traditional monitoring tools, which rely mainly on manual analysis, are insufficient for ensuring the availability and performance demanded by microservice and serverless topologies.

One of the main problems with traditional monitoring is the sheer volume and variety of telemetry data generated by IT environments: metrics, logs, and traces that, in an ideal world, would be consolidated on a single dashboard to give a view of the entire system. Another problem is static alarm thresholds: set them too low and they generate a flood of false positives; set them too high and they miss significant performance degradation.
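To make the static-threshold problem concrete, here is a small, self-contained Python sketch. It is not from the article's repository, and the numbers are purely illustrative: latency roughly triples, yet a typical 1-second static alarm never fires, while a check against the learned baseline flags essentially every sample.

```python
# Toy comparison: static threshold vs. baseline-relative anomaly detection.
import random

random.seed(7)
STATIC_THRESHOLD_MS = 1000  # a typical "safe" static alarm threshold

baseline = [random.gauss(120, 15) for _ in range(500)]  # healthy latency (ms)
degraded = [random.gauss(350, 60) for _ in range(500)]  # "gray failure" latency (ms)

# Learn what "normal" looks like from the healthy window
mean = sum(baseline) / len(baseline)
std = (sum((x - mean) ** 2 for x in baseline) / len(baseline)) ** 0.5

static_alarms = sum(x > STATIC_THRESHOLD_MS for x in degraded)
anomalies = sum(abs(x - mean) > 3 * std for x in degraded)

print(f"static threshold alarms:     {static_alarms}")  # expected: 0
print(f"baseline-relative anomalies: {anomalies}")      # expected: ~500
```

This is, of course, a caricature of what an AIOps platform does, but it captures the core idea: anomalies are defined relative to learned behavior, not fixed numbers.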

To solve these problems, organizations are shifting to an intelligent, automated, and predictive solution known as AIOps. Instead of relying on human operators to manually connect the dots, AIOps platforms are designed to ingest and analyze these vast datasets in real time.

In this article, we will see how AIOps platforms deliver proactive anomaly detection (their most fundamental capability) as well as root cause analysis, prediction, and alert generation.


The Technology Stack

The solution detailed in this article is a combination of three synergistic pillars:

  1. A managed AIOps platform that provides the analytical intelligence. We will use AWS DevOps Guru, which is the core of our solution and acts as its "AIOps brain." DevOps Guru is a managed service that leverages machine learning models built and trained by AWS experts. A key design principle is to make AIOps accessible to teams without machine-learning expertise. Its primary function is to detect operational issues or anomalies and produce high-level insights instead of a stream of raw, uncorrelated alerts. These insights include related log snippets, a detailed analysis with a possible root cause, and actionable steps to diagnose and remediate the issue.
  2. An open-standard observability framework that supplies high-quality telemetry data and provides a unified set of APIs, SDKs, and tools to generate, collect, and export it. The importance of OpenTelemetry lies in two principles: standardization and vendor neutrality. The practical benefit is that if we ever want to switch to a different AIOps tool, we can simply redirect the telemetry stream; see the sketch after this list.
  3. A serverless application that serves as an example of a modern, dynamic microservice topology.
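A minimal sketch of what vendor neutrality means in practice. This is not code from the repository; the tracer name, function, and attribute are illustrative. The point is that the application depends only on the OpenTelemetry API, so whether spans land in X-Ray, Jaeger, or another backend is decided by collector configuration, not application code:

```python
# Illustrative OpenTelemetry instrumentation (requires opentelemetry-api).
from opentelemetry import trace

# The tracer comes from the vendor-neutral API; if no SDK is configured,
# this degrades gracefully to a no-op tracer.
tracer = trace.get_tracer("demo-service")


def create_order(order_id: str) -> None:
    # The span goes to whatever exporter the SDK/collector is wired to
    with tracer.start_as_current_span("create_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...


create_order("order-123")
```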

The complete architecture of the proposed telemetry pipeline is shown in the diagram below.

Practical Implementation

It’s important to understand that AWS DevOps Guru does not collect any telemetry data itself; instead, it is configured to monitor and continuously analyze the resources created by the application and identified by specific tags.
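In the repository this registration is done in Terraform, but the same operation can be expressed directly against the DevOps Guru API. The sketch below is illustrative: the tag key and value are hypothetical, and (per AWS documentation) the app-boundary tag key must begin with the devops-guru- prefix.

```python
# Hedged sketch: register tagged resources for DevOps Guru analysis.
import boto3

guru = boto3.client("devops-guru")

guru.update_resource_collection(
    Action="ADD",
    ResourceCollection={
        "Tags": [
            {
                "AppBoundaryKey": "devops-guru-demo",  # hypothetical tag key
                "TagValues": ["serverless-app"],       # hypothetical tag value
            }
        ]
    },
)
```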

To give the reader a better understanding, this section provides a comprehensive guide to implementing the proposed solution; later, in the Experiment section, we will put it to the test. The following git repository structure aligns with IaC best practices:

```
.
├── demo
│   ├── envs
│   │   └── dev
│   │       ├── env.hcl            # Environment-specific configuration that sets the environment name
│   │       ├── api_gateway
│   │       │   └── terragrunt.hcl
│   │       ├── devopsguru
│   │       │   └── terragrunt.hcl
│   │       ├── dynamodb
│   │       │   └── terragrunt.hcl
│   │       ├── iam
│   │       │   └── terragrunt.hcl
│   │       └── serverless_app
│   │           └── terragrunt.hcl
│   └── project.hcl                # Project-level configuration defining `app_name_prefix` and `project_name` used across all environments
├── root.hcl                       # Root Terragrunt configuration that generates AWS provider blocks and configures the S3 backend
├── src
│   ├── app.py                     # Lambda handler function with OpenTelemetry instrumentation
│   ├── requirements.txt
│   └── collector.yaml
└── terraform
    └── modules                    # Infrastructure modules
        ├── api_gateway
        ├── devopsguru
        ├── dynamodb
        └── iam
```

:::info This modular (Terragrunt) approach has the following benefits:

  • True environment isolation: each environment (dev, prod, etc.) has its own state, config, and outputs.
  • All major AWS resources (Lambda, API Gateway, DynamoDB, IAM, DevOps Guru) are reusable Terraform modules in terraform/modules/.
  • Easy to extend for new AWS services or environments with minimal duplication.

:::

:::tip The full repository can be found here: https://github.com/kirPoNik/aws-aiops-detection-with-guru

:::

The Lambda function (code in app.py) receives requests from API Gateway, generates a unique ID, and puts an item into the DynamoDB table. It also contains the logic to inject a "gray failure", which we will need for our experiment. See the code snippet with the key logic below:

```python
import os
import random
import time
import uuid

import boto3

# --- CONFIGURATION FOR GRAY FAILURE SIMULATION ---
# This environment variable acts as our feature flag for the experiment
INJECT_LATENCY = os.environ.get("INJECT_LATENCY", "false").lower() == "true"
MIN_LATENCY_MS = 150  # Minimum artificial latency in milliseconds
MAX_LATENCY_MS = 500  # Maximum artificial latency in milliseconds

# The table name is supplied by Terragrunt via the TABLE_NAME environment variable
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])


def handler(event, context):
    """
    Handles requests and optionally injects a variable sleep
    to simulate performance degradation.
    """
    # This is the core logic for our "gray failure" simulation
    if INJECT_LATENCY:
        latency_seconds = random.randint(MIN_LATENCY_MS, MAX_LATENCY_MS) / 1000.0
        time.sleep(latency_seconds)

    # The function's primary business logic is to write an item to DynamoDB
    try:
        table.put_item(
            Item={
                "id": str(uuid.uuid4()),
                "created_at": int(time.time()),
            }
        )
        # ... returns a successful response (body elided here) ...
        return {"statusCode": 200}
    except Exception as e:
        # ... returns an error response (body elided here) ...
        return {"statusCode": 500, "body": str(e)}
```
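As a quick sanity check on how much degradation this injects, the delay logic can be isolated and sampled outside Lambda. This snippet is illustrative, reusing the same constants as app.py:

```python
# Sample the injected-delay distribution used by the handler.
import random

MIN_LATENCY_MS, MAX_LATENCY_MS = 150, 500
samples = [random.randint(MIN_LATENCY_MS, MAX_LATENCY_MS) / 1000.0
           for _ in range(10_000)]

print(f"min={min(samples):.3f}s  max={max(samples):.3f}s  "
      f"mean={sum(samples) / len(samples):.3f}s")
# A ~0.33 s average bump is big relative to a fast Lambda baseline, yet
# small enough to slip under a typical 1 s static alarm: a gray failure.
```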

The collector configuration (in collector.yaml) defines pipelines that send traces to AWS X-Ray and metrics to Amazon CloudWatch. See the key logic below:

```yaml
# This file configures the OTel Collector in the ADOT layer
receivers:
  # The otlp receiver is declared here so the pipelines below can reference it
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # Send trace data to AWS X-Ray
  awsxray:
  # Send metrics to CloudWatch using the Embedded Metric Format (EMF)
  awsemf:

service:
  pipelines:
    # The pipeline for traces: receive data -> export to X-Ray
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    # The pipeline for metrics: receive data -> export to CloudWatch
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
```

Simulating Failure and Generating Insights

:::info The Experiment: deploy the stack, establish a healthy baseline, inject a gray failure, and watch DevOps Guru react.

:::

Step 1: Deploy the Stack

In the demo/envs/dev directory, run the usual commands:

```bash
terragrunt init --all
terragrunt plan --all
terragrunt apply --all
```

Grab the API endpoint from the output and save it.

```bash
export API_URL=$(terragrunt output -json --all \
  | jq -r 'to_entries[] | select(.key | test("api_endpoint")) | .value.value')
```
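Before generating load, it is worth a quick smoke test that the endpoint answers. A hedged sketch, assuming API_URL is exported in the current environment:

```python
# Minimal smoke test of the deployed endpoint.
import os
import urllib.request

req = urllib.request.Request(os.environ["API_URL"], method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```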

:::tip You need to enable AWS DevOps Guru and wait 15-90 minutes while it discovers your applications and resources.

:::

Step 2: Establish a Baseline

DevOps Guru needs to learn what "normal" looks like. Let's give it some healthy traffic. We'll use hey, a simple load testing tool perfect for this job.

Run a light load for a few hours. This gives the ML models plenty of data to build a solid baseline.

```bash
# Run for 4 hours at 5 requests per second
hey -z 4h -q 5 -m POST "$API_URL"
```

:::tip Use GNU Screen to run this in the background.

:::

Step 3: Inject the Failure

Now for the fun part. We'll introduce our "gray failure": a subtle slowdown that a simple threshold alarm would likely miss.

In demo/envs/dev/serverless_app/terragrunt.hcl, set the INJECT_LATENCY flag in our Lambda function's environment variables:

```hcl
environment_variables = {
  TABLE_NAME                         = dependency.dynamodb.outputs.table_name
  AWS_LAMBDA_EXEC_WRAPPER            = "/opt/otel-instrument"
  OPENTELEMETRY_COLLECTOR_CONFIG_URI = "/var/task/collector.yaml"
  INJECT_LATENCY                     = "true" # <-- Change this to true
}
```

Apply the change. This quick deployment is an important event that DevOps Guru will notice.

```bash
terragrunt apply --all
```

Step 4: Generate Bad Traffic

Run the same load test again. This time, every request will have that extra, variable delay.

```bash
# Run for at least an hour to generate enough bad data
hey -z 1h -q 5 -m POST "$API_URL"
```

Our app is now performing worse than its baseline. Let's see if DevOps Guru noticed.

After 30-60 minutes of bad traffic, an "insight" popped up in the DevOps Guru console.

This is the real value of AIOps. A standard CloudWatch alarm would have just said, "Latency is high." DevOps Guru said, "Latency is high, and it started right after you deployed this change."
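You don't have to sit in the console, either; the same insight can be pulled programmatically. A hedged sketch using boto3's DevOps Guru client (the field names follow the ListInsights API; adjust as needed):

```python
# Poll DevOps Guru for ongoing reactive insights.
import boto3

guru = boto3.client("devops-guru")

resp = guru.list_insights(StatusFilter={"Ongoing": {"Type": "REACTIVE"}})
for insight in resp.get("ReactiveInsights", []):
    print(insight["Id"], insight["Severity"], insight["Name"])
```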

Conclusion

This experiment shows a clear path away from reactive firefighting. By pairing a standard observability framework like OpenTelemetry with an AIOps engine like AWS DevOps Guru, we can build systems that help us find and fix problems before they become disasters.

The big takeaway is correlation. The magic wasn't just spotting the latency spike; it was automatically linking it to the deployment. That's the jump from raw data to real insight.

The future of ops isn't about more dashboards. It's about fewer, smarter alerts that tell you what's wrong, why it's wrong, and how to fix it.

Resources

  • GitHub repository: https://github.com/kirPoNik/aws-aiops-detection-with-guru
  • AWS DevOps Guru official page
  • OpenTelemetry official documentation
  • AWS Distro for OpenTelemetry (ADOT) for Lambda
  • hey, an HTTP load generator
