Most monitoring tools only tell you when something is already broken. But what if you could find issues before they become outages? I just published a deep dive on using AIOps for proactive anomaly detection. This isn't just theory: it's a complete, hands-on tutorial with the working code you need to try it yourself. The stack: infrastructure defined with modern IaC tools (Terraform and Terragrunt), observability instrumented with OpenTelemetry, and analysis powered by AWS DevOps Guru.

Goodbye Manual Monitoring: How AIOps Spots Problems Before You Do

2025/10/22 12:45

The Limitations of Traditional Monitoring

Managing modern distributed applications has become increasingly complex. Traditional monitoring tools, which rely mainly on manual analysis, are insufficient for ensuring the availability and performance demanded by microservice or serverless topologies.

One of the main problems with traditional monitoring is the high volume and variety of telemetry data generated by IT environments. This includes metrics, logs, and traces, which in an ideal world should be consolidated on a single monitoring dashboard to allow observation of the entire system. Another problem is static thresholds for alarms. Setting them too low will generate a high volume of false positives, while setting them too high will fail to detect significant performance degradation.
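To make the static-threshold problem concrete, here is a small self-contained simulation (synthetic numbers, not data from this article's experiment): a subtle latency drift sails under a static 500 ms alarm, while a simple baseline-relative check flags it.

```python
import random
import statistics

random.seed(42)

# Healthy baseline: latency around 120 ms with modest jitter
baseline = [random.gauss(120, 10) for _ in range(500)]

# "Gray failure": a subtle drift up to ~160 ms -- real degradation,
# but still far below a typical static alarm threshold of 500 ms
degraded = [random.gauss(160, 10) for _ in range(100)]

STATIC_THRESHOLD_MS = 500

def static_alarm(sample_ms):
    """Classic static-threshold alarm: fires only on extreme values."""
    return sample_ms > STATIC_THRESHOLD_MS

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def zscore_alarm(sample_ms, k=3.0):
    """Baseline-relative alarm: fires when a sample deviates k sigmas."""
    return abs(sample_ms - mean) / stdev > k

static_hits = sum(static_alarm(s) for s in degraded)
zscore_hits = sum(zscore_alarm(s) for s in degraded)
print(f"static alarm fired {static_hits}/100, z-score alarm fired {zscore_hits}/100")
```

The static alarm never fires on the degraded traffic, while the baseline-relative check catches most of it. This is the gap that ML-driven baselining is designed to close.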

To solve these problems, organizations are shifting to an intelligent, automated, and predictive solution known as AIOps. Instead of relying on human operators to manually connect the dots, AIOps platforms are designed to ingest and analyze these vast datasets in real time.

In this article, we will learn how AIOps platforms deliver proactive anomaly detection, their most fundamental capability, as well as root cause analysis, prediction, and alert generation.


The Technology Stack

The solution detailed in this article is a combination of three synergistic pillars:

  1. A managed AIOps platform that provides the analytical intelligence. We will use AWS DevOps Guru, which is the core of our solution and acts as its "AIOps brain." AWS DevOps Guru is a managed service that leverages machine learning models built and trained by AWS experts. A key design principle is to make AIOps accessible to engineers without dedicated machine learning expertise. Its primary function is to detect operational issues or anomalies and produce high-level insights instead of a stream of raw, uncorrelated alerts. These insights include related log snippets, a detailed analysis with a possible root cause, and actionable steps to diagnose and remediate the issue.
  2. An open-standard observability framework that supplies high-quality telemetry data and provides a unified set of APIs, SDKs, and tools to generate, collect, and export it. The importance of OpenTelemetry lies in two principles: standardization and vendor neutrality. If we later want to switch to a different AIOps tool, we can simply redirect the telemetry stream.
  3. A serverless application that serves as an example of a modern, dynamic microservice topology.
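The vendor-neutrality principle can be sketched in a few lines. This is a toy illustration, not the OpenTelemetry API: the instrumented code talks to a single interface, and switching backends means swapping the exporter, not rewriting the instrumentation.

```python
class ConsoleExporter:
    """Toy exporter that 'ships' a span to stdout-style output."""
    def export(self, span_name):
        return f"console: {span_name}"

class XRayLikeExporter:
    """Hypothetical stand-in for a vendor backend exporter."""
    def export(self, span_name):
        return f"xray: {span_name}"

class Tracer:
    """The one interface the application code depends on."""
    def __init__(self, exporter):
        self.exporter = exporter

    def span(self, name):
        return self.exporter.export(name)

# The application code stays identical for either backend
tracer = Tracer(ConsoleExporter())
print(tracer.span("put_item"))

tracer.exporter = XRayLikeExporter()  # redirect the telemetry stream
print(tracer.span("put_item"))
```

In the real stack, the ADOT collector plays this role: the application always speaks OTLP, and the collector's exporter configuration decides where the data lands.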

The complete architecture of the proposed telemetry pipeline is shown in the diagram below.

Practical Implementation

It's important to understand that AWS DevOps Guru does not collect any telemetry data itself; instead, it is configured to monitor and continuously analyze the resources created by the application, identified by specific tags.
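As a sketch of what that tag-based scoping looks like, here is the shape of the resource-collection payload the DevOps Guru API accepts (the tag value `my-serverless-app` is hypothetical; the app boundary key must begin with the `devops-guru-` prefix):

```python
# Resource-collection payload: DevOps Guru analyzes only resources
# carrying this tag. The tag value below is a hypothetical example.
resource_collection = {
    "Tags": [
        {
            "AppBoundaryKey": "devops-guru-deployment-application",
            "TagValues": ["my-serverless-app"],
        }
    ]
}

# In a live setup this dict would be passed to:
#   boto3.client("devopsguru").update_resource_collection(
#       Action="ADD", ResourceCollection=resource_collection)
# In this tutorial, the equivalent is done by the devopsguru Terraform module.
print(resource_collection["Tags"][0]["AppBoundaryKey"])
```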

To give the reader a better understanding, this section provides a comprehensive guide to implementing the proposed solution; the Experiment section below then shows how to exercise it. The following Git repository structure aligns with IaC best practices:

```
.
├── demo
│   ├── envs
│   │   └── dev
│   │       ├── env.hcl             # Environment-specific configuration that sets the environment name
│   │       ├── api_gateway
│   │       │   └── terragrunt.hcl
│   │       ├── devopsguru
│   │       │   └── terragrunt.hcl
│   │       ├── dynamodb
│   │       │   └── terragrunt.hcl
│   │       ├── iam
│   │       │   └── terragrunt.hcl
│   │       └── serverless_app
│   │           └── terragrunt.hcl
│   └── project.hcl                 # Project-level configuration defining `app_name_prefix` and `project_name` used across all environments
├── root.hcl                        # Root Terragrunt configuration that generates AWS provider blocks and configures the S3 backend
├── src
│   ├── app.py                      # Lambda handler function with OpenTelemetry instrumentation
│   ├── requirements.txt
│   └── collector.yaml
└── terraform
    └── modules                     # Infrastructure modules
        ├── api_gateway
        ├── devopsguru
        ├── dynamodb
        └── iam
```

:::info Benefits of this modular Terragrunt approach:

  • True environment isolation: each environment (dev, prod, etc.) has its own state, config, and outputs.
  • All major AWS resources (Lambda, API Gateway, DynamoDB, IAM, DevOps Guru) are reusable Terraform modules in terraform/modules/.
  • Easy to extend for new AWS services or environments with minimal duplication.

:::

:::tip The full repository can be found here: https://github.com/kirPoNik/aws-aiops-detection-with-guru

:::

The Lambda function (code in app.py) receives requests from API Gateway, generates a unique ID, and puts an item into the DynamoDB table. It also contains the logic to inject a "gray failure," which we will need for our experiment. See the code snippet with the key logic below:

```python
import os
import time
import random
import boto3
import uuid

# --- CONFIGURATION FOR GRAY FAILURE SIMULATION ---
# This environment variable acts as our feature flag for the experiment
INJECT_LATENCY = os.environ.get("INJECT_LATENCY", "false").lower() == "true"
MIN_LATENCY_MS = 150  # Minimum artificial latency in milliseconds
MAX_LATENCY_MS = 500  # Maximum artificial latency in milliseconds

def handler(event, context):
    """
    Handles requests and optionally injects a variable sleep
    to simulate performance degradation.
    """
    # This is the core logic for our "gray failure" simulation
    if INJECT_LATENCY:
        latency_seconds = random.randint(MIN_LATENCY_MS, MAX_LATENCY_MS) / 1000.0
        time.sleep(latency_seconds)

    # The function's primary business logic is to write an item to DynamoDB
    try:
        table.put_item(
            Item={
                "id": str(uuid.uuid4()),
                "created_at": int(time.time()),
            }
        )
        # ... returns a successful response ...
    except Exception as e:
        # ... returns an error response ...
```
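Before deploying, the latency injection can be sanity-checked locally with a stubbed table. This is a hypothetical harness that re-declares the key logic from app.py; `StubTable` is not part of the repository.

```python
import os
import random
import time
import uuid

# Force the feature flag on for this local check
os.environ["INJECT_LATENCY"] = "true"
INJECT_LATENCY = os.environ["INJECT_LATENCY"].lower() == "true"
MIN_LATENCY_MS, MAX_LATENCY_MS = 150, 500

class StubTable:
    """Stand-in for the DynamoDB table so no AWS call is made."""
    def __init__(self):
        self.items = []
    def put_item(self, Item):
        self.items.append(Item)

table = StubTable()

def handler(event, context):
    # Same gray-failure logic as app.py
    if INJECT_LATENCY:
        time.sleep(random.randint(MIN_LATENCY_MS, MAX_LATENCY_MS) / 1000.0)
    table.put_item(Item={"id": str(uuid.uuid4()), "created_at": int(time.time())})
    return {"statusCode": 200}

start = time.monotonic()
resp = handler({}, None)
elapsed_ms = (time.monotonic() - start) * 1000
print(resp["statusCode"], f"{elapsed_ms:.0f} ms")
```

With the flag on, a single invocation should take at least ~150 ms; with it off, the sleep is skipped entirely.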

The collector configuration (in collector.yaml) defines pipelines that send traces to AWS X-Ray and metrics to Amazon CloudWatch. See the key logic below:

```yaml
# This file configures the OTel Collector in the ADOT layer
exporters:
  # Send trace data to AWS X-Ray
  awsxray:
  # Send metrics to CloudWatch using the Embedded Metric Format (EMF)
  awsemf:

service:
  pipelines:
    # The pipeline for traces: receive data -> export to X-Ray
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    # The pipeline for metrics: receive data -> export to CloudWatch
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
```

Simulating Failure and Generating Insights

:::info This is the Experiment section referenced earlier: we simulate a failure and watch DevOps Guru react.

:::

Step 1: Deploy the Stack

In the demo/envs/dev directory, run the usual commands:

```shell
terragrunt init --all
terragrunt plan --all
terragrunt apply --all
```

Grab the API endpoint from the output and save it.

```shell
export API_URL=$(terragrunt output -json --all \
  | jq -r 'to_entries[] | select(.key | test("api_endpoint")) | .value.value')
```
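If jq isn't handy, the same extraction can be done in Python. The JSON below is a hypothetical illustration of terragrunt's output shape; the real keys depend on your module names.

```python
import json

# Hypothetical shape of `terragrunt output -json --all`
raw = """{
  "serverless_app.api_endpoint": {
    "value": "https://abc123.execute-api.us-east-1.amazonaws.com/prod"
  }
}"""

outputs = json.loads(raw)
# Pick the first output whose key mentions "api_endpoint", mirroring the jq filter
api_url = next(v["value"] for k, v in outputs.items() if "api_endpoint" in k)
print(api_url)
```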

:::tip You need to enable AWS DevOps Guru and then wait 15-90 minutes while it discovers your applications and resources.

:::

Step 2: Establish a Baseline

DevOps Guru needs to learn what "normal" looks like. Let's give it some healthy traffic. We'll use hey, a simple load testing tool perfect for this job.

Run a light load for a few hours. This gives the ML models plenty of data to build a solid baseline.

```shell
# Run for 4 hours at 5 requests per second
hey -z 4h -q 5 -m POST "$API_URL"
```
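If hey isn't installed, a rough stdlib-only Python stand-in can pace POST requests at a fixed rate. This is a sketch, not a replacement for a real load tool: it is single-threaded, so slow responses will drag the effective rate down.

```python
import time
import urllib.request

def run_load(url, duration_s, rps):
    """POST to `url` at roughly `rps` requests/second for `duration_s` seconds."""
    interval = 1.0 / rps
    deadline = time.monotonic() + duration_s
    sent = 0
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            urllib.request.urlopen(
                urllib.request.Request(url, method="POST"), timeout=2
            )
        except Exception:
            pass  # count the attempt even if the request fails
        sent += 1
        # Sleep off the remainder of this request's time slot
        time.sleep(max(0.0, interval - (time.monotonic() - start)))
    return sent
```

Usage would be something like `run_load(os.environ["API_URL"], 4 * 3600, 5)` for the 4-hour baseline run.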

:::tip Use GNU Screen to run this in the background.

:::

Step 3: Inject the Failure

Now for the fun part. We'll introduce our "gray failure" - a subtle slowdown that a simple threshold alarm would likely miss.

In demo/envs/dev/serverless_app/terragrunt.hcl, set the INJECT_LATENCY environment variable on our Lambda function:

```hcl
environment_variables = {
  TABLE_NAME                         = dependency.dynamodb.outputs.table_name
  AWS_LAMBDA_EXEC_WRAPPER            = "/opt/otel-instrument"
  OPENTELEMETRY_COLLECTOR_CONFIG_URI = "/var/task/collector.yaml"
  INJECT_LATENCY                     = "true" # <-- Change this to true
}
```

Apply the change. This quick deployment is an important event that DevOps Guru will notice.

```shell
terragrunt apply --all
```

Step 4: Generate Bad Traffic

Run the same load test again. This time, every request will have that extra, variable delay.

```shell
# Run for at least an hour to generate enough bad data
hey -z 1h -q 5 -m POST "$API_URL"
```

Our app is now performing worse than its baseline. Let's see if DevOps Guru noticed.

After 30-60 minutes of bad traffic, an "insight" popped up in the DevOps Guru console.

This is the real value of AIOps. A standard CloudWatch alarm would have just said, "Latency is high." DevOps Guru said, "Latency is high, and it started right after you deployed this change."
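The correlation step itself is conceptually simple. Here is a toy sketch of linking an anomaly to the most recent deployment; the timestamps are hypothetical and this is not DevOps Guru's actual algorithm, which correlates many more signal types.

```python
from datetime import datetime, timedelta

# Hypothetical event streams: deployment times and the detected anomaly start
deployments = [
    datetime(2025, 10, 22, 9, 0),   # routine deploy, hours earlier
    datetime(2025, 10, 22, 13, 5),  # the INJECT_LATENCY deploy
]
anomaly_start = datetime(2025, 10, 22, 13, 40)

def likely_cause(anomaly, deploys, window=timedelta(hours=1)):
    """Return the most recent deployment within `window` before the anomaly."""
    candidates = [d for d in deploys if timedelta(0) <= anomaly - d <= window]
    return max(candidates) if candidates else None

print(likely_cause(anomaly_start, deployments))  # the 13:05 deploy
```

The value of an AIOps platform is doing this kind of cross-referencing automatically, at scale, across deployments, config changes, and every telemetry stream it ingests.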

Conclusion

This experiment shows a clear path away from reactive firefighting. By pairing a standard observability framework like OpenTelemetry with an AIOps engine like AWS DevOps Guru, we can build systems that help us find and fix problems before they become disasters.

The big takeaway is correlation. The magic wasn't just spotting the latency spike; it was automatically linking it to the deployment. That's the jump from raw data to real insight.

The future of ops isn't about more dashboards. It's about fewer, smarter alerts that tell you what's wrong, why it's wrong, and how to fix it.

Resources

  • GitHub Repository: https://github.com/kirPoNik/aws-aiops-detection-with-guru
  • AWS DevOps Guru Official Page
  • OpenTelemetry Official Documentation
  • AWS Distro for OpenTelemetry (ADOT) for Lambda
  • hey - HTTP Load Generator
