Most monitoring tools only tell you when something is already broken. But what if you could find issues before they become outages? I just published a deep dive on using AIOps for proactive anomaly detection. This isn't just theory: it's a complete, hands-on tutorial with the working code you need to try it yourself. The stack:

  • Infrastructure: defined with modern IaC tools, Terraform and Terragrunt
  • Observability: instrumented with OpenTelemetry
  • Analysis: powered by AWS DevOps Guru

Goodbye Manual Monitoring: How AIOps Spots Problems Before You Do

Limitations of Traditional Monitoring

Managing modern distributed applications has become increasingly complex. Traditional monitoring tools, which rely mainly on manual analysis, are insufficient for ensuring the availability and performance demanded by microservice or serverless topologies.

One of the main problems with traditional monitoring is the high volume and variety of telemetry data generated by IT environments. This includes metrics, logs, and traces, which in an ideal world would be consolidated on a single monitoring dashboard to allow observation of the entire system. Another problem is static alarm thresholds: set them too low and you generate a high volume of false positives; set them too high and you fail to detect significant performance degradation.
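To make the threshold problem concrete, here is a toy sketch (purely illustrative, not part of the tutorial's code) comparing a fixed threshold with a simple baseline-relative check on synthetic latency data:

```python
# Toy illustration (not from the repository): a fixed threshold misses a gradual
# slowdown that a baseline-relative check catches.
import random
import statistics

random.seed(7)
baseline = [random.gauss(120, 10) for _ in range(500)]   # "normal" latency in ms
degraded = [random.gauss(170, 15) for _ in range(100)]   # subtle slowdown, still far below 300 ms

STATIC_THRESHOLD_MS = 300  # a typical "safe" static alarm threshold
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

static_alerts = sum(1 for x in degraded if x > STATIC_THRESHOLD_MS)
baseline_alerts = sum(1 for x in degraded if x > mean + 3 * stdev)

print(f"static threshold alerts:  {static_alerts}")    # almost certainly 0
print(f"baseline-relative alerts: {baseline_alerts}")  # most of the degraded samples
```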

To solve these problems, organizations are shifting to an intelligent, automated, and predictive solution known as AIOps. Instead of relying on human operators to manually connect the dots, AIOps platforms are designed to ingest and analyze these vast datasets in real time.

In this article, we will learn how AIOps platforms deliver proactive anomaly detection, their most fundamental capability, as well as root cause analysis, prediction, and alert generation.


The Technology Stack

The solution detailed in this article is a combination of three synergistic pillars:

  1. A managed AIOps platform that provides analytical intelligence. We will use AWS DevOps Guru, which is the core of our solution and acts as its "AIOps brain." AWS DevOps Guru is a managed service that leverages machine learning models built and trained by AWS experts. A key design principle is to make AIOps accessible to operators without specialized machine learning expertise. Its primary function is to detect operational issues or anomalies and produce high-level insights instead of a stream of raw, uncorrelated alerts. These insights include related log snippets, a detailed analysis with a possible root cause, and actionable steps to diagnose and remediate the issue.
  2. An open-standard observability framework that supplies high-quality telemetry data and provides a unified set of APIs, SDKs, and tools to generate, collect, and export it. The importance of OpenTelemetry lies in two principles: standardization and vendor neutrality. If we later want to switch to a different AIOps tool, we only need to redirect the telemetry stream (a minimal instrumentation sketch follows this list).
  3. A serverless application that serves as an example of a modern, dynamic microservice topology.
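To illustrate the vendor-neutrality point, here is a minimal sketch of OpenTelemetry instrumentation in Python. This is not the repository's app.py; it assumes the opentelemetry-sdk and OTLP exporter packages are installed and that an OTLP endpoint (such as the ADOT collector) is reachable.

```python
# Minimal OpenTelemetry sketch (illustrative, not the repo's app.py).
# The code only speaks OTLP; which backend receives the data (X-Ray, another
# vendor, ...) is decided by the collector configuration, not by this code.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "demo-service"}))
# The exporter endpoint defaults to localhost:4317, or OTEL_EXPORTER_OTLP_ENDPOINT if set.
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("put-item"):
    # business logic would go here
    pass
```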

The complete architecture of the proposed telemetry pipeline is shown in the diagram below.

Practical Implementation

It's important to understand that AWS DevOps Guru does not collect any telemetry data itself; it is configured to monitor and continuously analyze the resources created by the application and identified by specific tags.
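As a hedged sketch (assuming default AWS credentials and region, and that tag-based coverage is used as in this demo), you can inspect which tag boundary DevOps Guru is watching with boto3:

```python
# Sketch: inspect the DevOps Guru resource collection (tag-based coverage).
# Assumes default AWS credentials/region and that coverage was configured via tags,
# as the demo's Terraform module does.
import boto3

guru = boto3.client("devops-guru")

response = guru.get_resource_collection(ResourceCollectionType="AWS_TAGS")
for tag in response.get("ResourceCollection", {}).get("Tags", []):
    print(tag.get("AppBoundaryKey"), tag.get("TagValues"))
```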

To give the reader a better understanding, this section provides a comprehensive guide on how to implement the proposed solution; the Experiment section that follows shows how to put it to the test. The following git repository structure aligns with IaC best practices:

```
.
├── demo
│   ├── envs
│   │   └── dev
│   │       ├── env.hcl              # Environment-specific configuration that sets the environment name
│   │       ├── api_gateway
│   │       │   └── terragrunt.hcl
│   │       ├── devopsguru
│   │       │   └── terragrunt.hcl
│   │       ├── dynamodb
│   │       │   └── terragrunt.hcl
│   │       ├── iam
│   │       │   └── terragrunt.hcl
│   │       └── serverless_app
│   │           └── terragrunt.hcl
│   └── project.hcl                  # Project-level configuration defining `app_name_prefix` and `project_name` used across all environments
├── root.hcl                         # Root Terragrunt configuration that generates AWS provider blocks and configures the S3 backend
├── src
│   ├── app.py                       # Lambda handler function with OpenTelemetry instrumentation
│   ├── requirements.txt
│   └── collector.yaml
└── terraform
    └── modules                      # Infrastructure modules
        ├── api_gateway
        ├── devopsguru
        ├── dynamodb
        └── iam
```

:::info This modular Terragrunt approach has the following benefits:

  • True environment isolation: each environment (dev, prod, etc.) has its own state, config, and outputs.
  • All major AWS resources (Lambda, API Gateway, DynamoDB, IAM, DevOps Guru) are reusable Terraform modules in terraform/modules/.
  • Easy to extend for new AWS services or environments with minimal duplication.

:::

:::tip The full repository can be found here: https://github.com/kirPoNik/aws-aiops-detection-with-guru

:::

The Lambda function (code in app.py) receives requests from API Gateway, generates a unique ID, and puts an item into the DynamoDB table. It also contains the logic to inject a "gray failure", which we will need for our experiment. See the code snippet with the key logic below:

```python
import os
import time
import random
import uuid

import boto3

# --- CONFIGURATION FOR GRAY FAILURE SIMULATION ---
# This environment variable acts as our feature flag for the experiment
INJECT_LATENCY = os.environ.get("INJECT_LATENCY", "false").lower() == "true"
MIN_LATENCY_MS = 150  # Minimum artificial latency in milliseconds
MAX_LATENCY_MS = 500  # Maximum artificial latency in milliseconds

# The DynamoDB table name comes from the TABLE_NAME environment variable set by Terraform
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])


def handler(event, context):
    """
    Handles requests and optionally injects a variable sleep
    to simulate performance degradation.
    """
    # This is the core logic for our "gray failure" simulation
    if INJECT_LATENCY:
        latency_seconds = random.randint(MIN_LATENCY_MS, MAX_LATENCY_MS) / 1000.0
        time.sleep(latency_seconds)

    # The function's primary business logic is to write an item to DynamoDB
    try:
        table.put_item(
            Item={
                "id": str(uuid.uuid4()),
                "created_at": int(time.time()),
            }
        )
        # ... returns a successful response ...
        ...
    except Exception as e:
        # ... returns an error response ...
        ...
```

The collector configuration (in collector.yaml) defines pipelines that send traces to AWS X-Ray and metrics to Amazon CloudWatch. The key logic is shown below:

```yaml
# This file configures the OTel Collector in the ADOT layer
exporters:
  # Send trace data to AWS X-Ray
  awsxray:
  # Send metrics to CloudWatch using the Embedded Metric Format (EMF)
  awsemf:

service:
  pipelines:
    # The pipeline for traces: receive data -> export to X-Ray
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    # The pipeline for metrics: receive data -> export to CloudWatch
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
```

Simulating Failure and Generating Insights

:::info This is the Experiment section mentioned earlier.

:::

Step 1: Deploy the Stack

In the demo/envs/dev directory, run the usual commands:

```bash
terragrunt init --all
terragrunt plan --all
terragrunt apply --all
```

Grab the API endpoint from the output and save it.

```bash
export API_URL=$(terragrunt output -json --all \
  | jq -r 'to_entries[] | select(.key | test("api_endpoint")) | .value.value')
```
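As an optional sanity check, here is a hedged sketch that sends one request to the deployed endpoint before the load test starts; it assumes API_URL is exported in the environment and that the demo's API Gateway accepts unauthenticated POST requests:

```python
# Optional sanity check (illustrative): send one POST to the deployed endpoint.
# Assumes API_URL is exported in the environment and the API is publicly reachable.
import os
import urllib.request

api_url = os.environ["API_URL"]
request = urllib.request.Request(api_url, data=b"{}", method="POST")

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())
```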

:::tip You need to enable AWS DevOps Guru and wait 15-90 minutes while it discovers your applications and resources.

:::
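If you want to check on DevOps Guru without the console, a hedged boto3 sketch (assuming default credentials and region) could look like this:

```python
# Sketch: quick health check of DevOps Guru for the current account/region.
# Assumes default AWS credentials and that DevOps Guru has been enabled.
import boto3

guru = boto3.client("devops-guru")

health = guru.describe_account_health()
print("Open proactive insights:", health.get("OpenProactiveInsights"))
print("Open reactive insights: ", health.get("OpenReactiveInsights"))
print("Metrics analyzed:       ", health.get("MetricsAnalyzed"))
```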

Step 2: Establish a Baseline

DevOps Guru needs to learn what "normal" looks like. Let's give it some healthy traffic. We'll use hey, a simple load testing tool perfect for this job.

Run a light load for a few hours. This gives the ML models plenty of data to build a solid baseline.

```bash
# Run for 4 hours at 5 requests per second
hey -z 4h -q 5 -m POST "$API_URL"
```

:::tip Use GNU Screen to run this in the background.

:::

Step 3: Inject the Failure

Now for the fun part. We'll introduce our "gray failure" - a subtle slowdown that a simple threshold alarm would likely miss.

In demo/envs/dev/serverless_app/terragrunt.hcl, add a new INJECT_LATENCY entry to the Lambda function's environment variables:

```hcl
environment_variables = {
  TABLE_NAME                         = dependency.dynamodb.outputs.table_name
  AWS_LAMBDA_EXEC_WRAPPER            = "/opt/otel-instrument"
  OPENTELEMETRY_COLLECTOR_CONFIG_URI = "/var/task/collector.yaml"
  INJECT_LATENCY                     = "true" # <-- Set this to true
}
```

Apply the change. This quick deployment is an important event that DevOps Guru will notice.

```bash
terragrunt apply --all
```

Step 4: Generate Bad Traffic

Run the same load test again. This time, every request will have that extra, variable delay.

```bash
# Run for at least an hour to generate enough bad data
hey -z 1h -q 5 -m POST "$API_URL"
```

Our app is now performing worse than its baseline. Let's see if DevOps Guru noticed.

After 30-60 minutes of bad traffic, an "insight" popped up in the DevOps Guru console.
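You can also poll for insights programmatically instead of refreshing the console. Here is a hedged boto3 sketch (assuming default credentials and region; field names follow the boto3 devops-guru API):

```python
# Sketch: list currently open DevOps Guru insights without opening the console.
# Assumes default AWS credentials/region.
import boto3

guru = boto3.client("devops-guru")

for insight_type in ("REACTIVE", "PROACTIVE"):
    response = guru.list_insights(StatusFilter={"Ongoing": {"Type": insight_type}})
    insights = response.get("ReactiveInsights", []) + response.get("ProactiveInsights", [])
    for insight in insights:
        print(insight_type, insight.get("Severity"), insight.get("Name"))
```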

This is the real value of AIOps. A standard CloudWatch alarm would have just said, "Latency is high." DevOps Guru said, "Latency is high, and it started right after you deployed this change."

Conclusion

This experiment shows a clear path away from reactive firefighting. By pairing a standard observability framework like OpenTelemetry with an AIOps engine like AWS DevOps Guru, we can build systems that help us find and fix problems before they become disasters.

The big takeaway is correlation. The magic wasn't just spotting the latency spike; it was automatically linking it to the deployment. That's the jump from raw data to real insight.

The future of ops isn't about more dashboards. It's about fewer, smarter alerts that tell you what's wrong, why it's wrong, and how to fix it.

Resources

  • GitHub Repository: https://github.com/kirPoNik/aws-aiops-detection-with-guru
  • AWS DevOps Guru Official Page
  • OpenTelemetry Official Documentation
  • AWS Distro for OpenTelemetry (ADOT) for Lambda
  • hey - HTTP Load Generator
