
Analysis of the ROGUE Agent-Based Automated Web Testing System

In this series of articles about agents in cybersecurity, specifically pentesting, I will examine and describe various publicly available projects, explain their operating principles, test their functionality, verify the quality of the results, and summarize. In parallel, we will develop our own pentesting agent from start to finish, testing various language models, both those that run locally and those accessible only via an API. We will also train our own large language model for pentesting.

In this article, I want to analyze a simple project: https://github.com/faizann24/rogue

1. Technology set

The repository has no release tags, so note that I am working with the version from the latest commit, dated June 18, 2025.

Judging by the project description, it is positioned as an agent for automatic web scanning, most likely targeting web applications rather than the web as a general concept.

It is written in Python. OpenAI is used as the large language model provider; other models cannot be used because the OpenAI calls are hardcoded in the llm.py file. The gpt-4o and o4-mini models are used. As can be seen from the code in this file, there was an attempt to add Anthropic models, but it never went beyond a class declaration. The entire browser interaction layer is built on the Playwright testing framework.
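
To make the "hardcoded" point concrete, here is a minimal sketch of what such a call typically looks like; it is an illustration under my own naming (ask_llm), not the project's actual llm.py:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_llm(system_prompt: str, user_prompt: str, model: str = "o4-mini") -> str:
    """Send one chat-completion request to a fixed OpenAI model."""
    response = client.chat.completions.create(
        model=model,  # only OpenAI model names such as "gpt-4o" or "o4-mini" will work here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content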

Now, regarding the agent features this project uses: tools, a system prompt describing the agent's role, and a knowledge base. Let's describe each in more detail. The tools are located in the tools.py file. Some of them wrap Playwright functionality, one runs Python code generated by the language model, and the rest are unfinished functions plus a method that attempts to collect the output of the tools that were used.
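
As an illustration of what such a Playwright-backed tool looks like, here is a hedged sketch; the function name and its interface are my assumptions, not the project's real tools.py:

from playwright.sync_api import sync_playwright

def fill_and_read(url: str, selector: str, payload: str) -> str:
    """Open the page, fill one field chosen by the LLM, and return the HTML as tool output."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        page.fill(selector, payload)  # the LLM supplies selector and payload
        html = page.content()         # this output is fed back to the model
        browser.close()
    return html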

The system prompts vary depending on the step. The LLM system prompt describes the agent as a security-testing agent, adds some knowledge about SQL injection and other attack hints, and describes the Playwright functions and their output.

Checks for subdomains or passwords are done in the most primitively simple way possible: the data is just compared against a custom wordlist collection in the list directory.
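
To show how little is behind such a check, here is a sketch of a bare wordlist lookup; the file path is illustrative, not the project's actual layout:

from pathlib import Path

def load_wordlist(path: str = "lists/passwords.txt") -> list[str]:
    """Read candidate values, one per line, skipping blanks."""
    return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]

def is_known_value(candidate: str, wordlist: list[str]) -> bool:
    """A bare membership test: no mutations, no rate awareness, no context."""
    return candidate in wordlist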

The description claims that the system operates automatically, but this is not true. The system has a tool that requests information from the user; not only is it impossible to predict when the system will stop and wait for a response, but the function itself is simply a stub that does not use the entered data.
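
A sketch of the problem, with an illustrative function name: a tool that blocks the run waiting for input and then throws the answer away.

def ask_user(question: str) -> str:
    """Prompt the operator mid-scan; the answer is never wired back into the plan."""
    answer = input(f"[agent] {question}\n> ")
    # Stub behaviour: the collected answer is discarded, so the scan cannot
    # adapt to it, yet the run still stops here and waits for a human.
    return "ok"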

2. Description of the system operation

A general description of the application's workflow is shown in the diagram. Briefly, when run.py is launched, an Agent object is created. During its initialization, a simple knowledge base is built from data fetched from third-party websites. In the diagram these are pentestmonkey sites, but there are others, as can be seen in the knowledge_fetcher.py file. This data is passed to the LLM object, the scheduler is initialized with its system prompt (highlighted in red in the diagram), and finally the Reporter is initialized.
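
A rough sketch of that initialization flow, with placeholder bodies; the class names follow the description above, but the signatures are assumptions, not the project's real code:

class LLM:
    def __init__(self, knowledge: str):
        self.knowledge = knowledge  # knowledge-base text is baked into the prompts

class Scheduler:
    def __init__(self, llm: LLM):
        self.llm = llm              # carries its own planning system prompt

class Reporter:
    def __init__(self, llm: LLM):
        self.llm = llm              # produces the final report

def fetch_knowledge() -> str:
    """Stand-in for knowledge_fetcher.py, which pulls cheat sheets from third-party sites."""
    return "cheat-sheet text fetched at startup"

class Agent:
    def __init__(self, url: str):
        self.url = url
        self.llm = LLM(knowledge=fetch_knowledge())
        self.scheduler = Scheduler(self.llm)
        self.reporter = Reporter(self.llm)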

Based on the parsed page, self.scanner.scan(url) produces a page representation for analysis through the Summarizer with its own system prompt. This happens in two stages: a plain parse and a final pass through the LLM, where it tries to find the elements it needs.

Based on the findings, a scan plan is generated and executed until all of its steps are completed. The system analyzes the page, calls tools, reads their output, and creates an intermediate summary using its own system prompt.
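
Put together, the loop looks roughly like this simplified, assumption-based sketch; the helper functions are illustrative stand-ins, not the project's code:

def scan_page(url: str) -> str:
    """Stage 1: plain parse of the page; stage 2: LLM summary of the interesting elements."""
    raw_html = f"<html>...fetched from {url}...</html>"
    return f"summary of forms and inputs found in {len(raw_html)} bytes of HTML"

def make_plan(page_summary: str) -> list[str]:
    """In the real system the planning LLM produces these steps from its system prompt."""
    return ["probe stored XSS in the blog form", "probe SQL injection in the login form"]

def execute_step(step: str) -> str:
    """Run tools (Playwright actions, generated Python) and collect their output."""
    return f"tool output for: {step}"

def run(url: str) -> list[str]:
    summaries = []
    for step in make_plan(scan_page(url)):
        output = execute_step(step)
        summaries.append(output)  # intermediate summaries feed the final report
    return summaries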

At the end, a final report is generated with its own system prompt.

3. Practical testing

3.1. Testing environment

For this study, I used the well-known Metasploitable 2 project, which includes Mutillidae, a web application for practicing web vulnerabilities. I chose the simplest XSS vulnerability. To do this, I booted the machine on my isolated network and selected the following page for testing: http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php

The simplest payload works here: <script>alert("a")</script>

3.2. Launch technique

Run it with the command:

python run.py -u http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php

By default, the LLM model is o4-mini.

3.3. Observation and analysis of the process

We can see that the system selected the o4-mini model and that RAG data is disabled.

The system saw that JS was triggered and realized it needed to run an XSS check, but it immediately started doing complex checks and went down the wrong path. There were attempts at Stored XSS Injection via Client-Side Filter Evasion, handled remarkably clumsily for such a simple vulnerability, which ended in a failed fill("textarea[name='content']", "").
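
For contrast, a minimal hand-written Playwright check is enough to confirm this stored XSS. The selectors below are assumptions for illustration and may need adjusting to the actual field names on the Mutillidae form:

from playwright.sync_api import sync_playwright

URL = "http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php"
PAYLOAD = '<script>alert("a")</script>'

def check_stored_xss() -> bool:
    fired = {"alert": False}
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # If the injected script executes, the page raises a JS dialog.
        page.on("dialog", lambda d: (fired.update(alert=True), d.dismiss()))
        page.goto(URL)
        page.fill("textarea", PAYLOAD)      # illustrative selector
        page.click("input[type='submit']")  # illustrative selector
        page.goto(URL)                      # reload: the stored payload should fire
        page.wait_for_timeout(2000)
        browser.close()
    return fired["alert"]

if __name__ == "__main__":
    print("stored XSS confirmed" if check_stored_xss() else "no alert observed")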

The system also tried Stored XSS Injection via Alternative Payload Encoding.

During the scan, for some reason, it jumped to another page.

It took 10-15 minutes to scan one page, but if you need to scan 20-30 pages, that's hours per site. In real life, you're given a client network with several web services and a bunch of other services that need to be scanned within a reasonable time, and no one will wait a week.

The report was completely disappointing: even though I saw the attack succeed when the injected code executed, the result was not saved, and the report ended up containing no vulnerabilities.

The second time, the system was launched with the simple RAG enabled.

And this time I saw the coveted XSS.

But the result did not change much, and in some ways got worse. This time the vulnerability that was found was not even tested at the same stage, only at the very end.

And the system asked for user actions.

This did not help either, since it still required manual input, and the final report again ended up empty of useful data.

As tested, the system has numerous issues: failure to record successful actions, complex plans that confuse the system in simple scenarios, lack of full automation, the need for interactive user input, and others. All of these issues have been resolved in our SxipherAI system.

Summary

• The system is not fully automated.

• The system uses only OpenAI.

• It cannot fully test web applications; it attempts XSS, CSRF, and SQL injection, but all to no avail.

• Even with the most basic testing labs, which even a novice pen tester could handle, the system failed.

• The project was abandoned without ever reaching a logical conclusion.

• There were attempts to use tools, Playwright, and a Summarizer, but to no effect.

In upcoming lessons, we'll demonstrate how the SxipherAI system handles such websites, specifically how SxipherAI handles XSS vulnerabilities, and develop our own simple agent for checking for XSS vulnerabilities.

And next up will be PentestGPT.
