
Building a Self-Healing Web Tester with AI Agents and Combinatorial Logic

2026/01/07 12:14

54% of software defects in production are caused by human error during testing.

If you are a QA Engineer or a Full Stack Developer, you know the pain of Web UI Testing. You spend days writing Selenium or Playwright scripts, targeting specific div IDs and XPath selectors. Then, a frontend developer changes a CSS class, and your entire test suite turns red.

Traditional RPA (Robotic Process Automation) is brittle. It breaks when the UI changes. It’s strictly rule-based.

In this engineering guide, based on research from Fujitsu’s Social Infrastructure Division, we are going to build a "Next-Gen" Testing Pipeline. We will move away from brittle scripts and move toward Autonomous AI Agents.

We will combine Combinatorial Parameter Generation (to ensure we test every edge case) with AI Agents (using tools like browser-use) that "see" the website like a human, making your tests immune to UI changes.

The Architecture: The Agentic Test Loop

We are building a system that doesn't just "click coordinates"; it understands intent.

The Pipeline:

  1. Source Analysis: Extract parameters from the code/specifications.

  2. Combinatorial Engine: Generate the minimum set of test cases to cover all logic paths.

  3. The Agent: An LLM-driven browser controller that executes the test.

  4. The Judge: An AI validator that checks if the output matches the expectation.
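As a rough sketch, the four stages above can be wired into a single loop. Everything below is a stub with hypothetical names (extract_parameters, run_agent, judge_result stand in for the real components built in the phases that follow):

```python
from itertools import product

def extract_parameters(source):
    """1. Source Analysis: pull testable settings out of code/specs (stubbed)."""
    return {"Theme": ["Dark", "Light"], "Role": ["Admin", "User"]}

def generate_cases(params):
    """2. Combinatorial Engine: here the full product; Phase 1 shrinks this."""
    keys = list(params)
    return [dict(zip(keys, combo)) for combo in product(*params.values())]

def run_agent(case):
    """3. The Agent: would drive a browser from a natural-language task (stubbed)."""
    return {"case": case, "observed": "Success toast shown"}

def judge_result(outcome):
    """4. The Judge: would ask an LLM to compare observed vs expected (stubbed)."""
    return outcome["observed"] == "Success toast shown"

def agentic_test_loop(source):
    cases = generate_cases(extract_parameters(source))
    return [(case, judge_result(run_agent(case))) for case in cases]

results = agentic_test_loop("settings_page.jsx")
print(f"{sum(ok for _, ok in results)}/{len(results)} cases passed")
```

The point of the skeleton is the separation of concerns: the combinatorial engine decides *what* to test, the agent decides *how*, and the judge decides *whether it worked*.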

Phase 1: The Combinatorial Engine (Smart Pattern Generation)

A common mistake in testing is testing everything (too slow) or testing randomly (misses bugs). The research suggests analyzing source code to generate an Exhaustive Parameter Table.

We need to cover the "All-Pairs" (pairwise) combinations of settings to catch interaction bugs.

**The Logic:** If you have 3 settings:

  • Theme: [Dark, Light]
  • Notifications: [Email, SMS, Push]
  • Role: [Admin, User]

Testing every combination = 2 × 3 × 2 = 12 tests. Pairwise testing can reduce this to ~6 tests while catching 90%+ of defects.

**Python Implementation:** We can use the allpairspy library to generate this matrix automatically.

```python
from allpairspy import AllPairs

# Parameters extracted from the Web UI Source Code
parameters = [
    ["Dark", "Light"],
    ["Email", "SMS", "Push"],
    ["Admin", "User"],
]

print("PAIRWISE TEST CASES:")
for i, pairs in enumerate(AllPairs(parameters)):
    print(f"Case {i}: Theme={pairs[0]}, Notify={pairs[1]}, Role={pairs[2]}")

# Output:
# Case 0: Theme=Dark, Notify=Email, Role=Admin
# Case 1: Theme=Light, Notify=SMS, Role=Admin
# ... (Optimized list)
```

Phase 2: The AI Agent (Without Selenium)

This is the game-changer. Instead of writing driver.find_element(By.ID, "submit-btn").click(), we give an AI agent a high-level instruction.

The research highlights the use of "Browser Use," an emerging class of AI agents that control headless browsers.

Why this works:

  • If the "Submit" button's underlying HTML changes (say, its id or tag is renamed), Selenium fails.
  • The AI Agent sees a visual element labeled "Submit" and clicks it, regardless of the underlying HTML.
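A toy illustration of the difference, using a plain dict as a stand-in for the DOM (all selectors and labels here are invented for the example):

```python
# Toy model: the "DOM" maps selectors to visible labels.
dom_v1 = {"#submit-btn": "Submit", "#cancel-btn": "Cancel"}
dom_v2 = {".btn-primary": "Submit", "#cancel-btn": "Cancel"}  # frontend renamed the id

def click_by_selector(dom, selector):
    # Selenium-style: depends on the exact selector surviving refactors.
    return selector in dom

def click_by_label(dom, label):
    # Agent-style: find whatever element is visually labeled "Submit".
    return any(text == label for text in dom.values())

print(click_by_selector(dom_v1, "#submit-btn"))  # True
print(click_by_selector(dom_v2, "#submit-btn"))  # False - the script breaks
print(click_by_label(dom_v2, "Submit"))          # True - the agent still finds it
```

The selector-based lookup is coupled to implementation details; the label-based lookup is coupled only to what the user actually sees.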

The Implementation

We will use Python with the browser-use library (which drives Playwright under the hood) and langchain to build an agent that accepts the parameters from Phase 1.

```python
from langchain.chat_models import ChatOpenAI
from browser_use import Agent
import asyncio

async def run_ai_test(theme, notify, role):
    # 1. Construct the Natural Language Instruction
    instruction = f"""
    Go to 'http://localhost:3000/settings'.
    Log in as a '{role}'.
    Change the Theme to '{theme}'.
    Set Notifications to '{notify}'.
    Click 'Save'.
    Verify that the 'Success' toast message appears.
    """

    # 2. Initialize the Agent
    agent = Agent(
        task=instruction,
        llm=ChatOpenAI(model="gpt-4-vision-preview"),
    )

    # 3. Execute
    history = await agent.run()

    # 4. Return result
    return history.is_successful()

# Run a test case from Phase 1
asyncio.run(run_ai_test("Dark", "SMS", "Admin"))
```

Phase 3: The "Past Failure" Feedback Loop (RAG)

The paper notes that 54% of defects are human error—often repeating past mistakes. To fix this, we inject "Past Failure Knowledge" into the Agent.

We create a lightweight RAG (Retrieval-Augmented Generation) system. Before generating the test plan, the system checks a vector database of previous bug reports.

The Workflow:

  1. Ingest: Index old Jira tickets/Bug reports into a Vector DB.
  2. Retrieve: When testing the "Settings Page," retrieve bugs related to "Settings."
  3. Inject: Add a constraint to the Agent's prompt.

Modified Prompt Logic:

```python
# Retrieved Context: "Bug #402: Saving settings fails when username contains emoji."
enhanced_instruction = f"""
{base_instruction}

IMPORTANT: Based on past failure #402, please also test changing the
username to 'User😊' before saving to ensure the app does not crash.
"""
```
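The "Retrieve" step can be prototyped without a real vector database. The sketch below uses simple word overlap as a stand-in for embedding similarity; the bug reports and the scoring function are invented for illustration:

```python
# Naive retrieval: word overlap stands in for vector similarity.
bug_reports = [
    "Bug #402: Saving settings fails when username contains emoji.",
    "Bug #517: Checkout page crashes on expired coupon codes.",
    "Bug #233: Settings page theme toggle resets notification preferences.",
]

def retrieve(query, docs, top_k=2):
    # Score each document by how many query words it shares.
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

context = retrieve("settings page save", bug_reports)
enhanced = "Test the Settings page.\n\nKnown past failures:\n" + "\n".join(context)
print(enhanced)
```

In production this would be replaced by an embedding model and a vector store, but the shape of the workflow is identical: score, rank, take the top-k, and prepend the hits to the agent's task prompt.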

The ROI: Why Switch?

The research indicates massive efficiency gains from this approach.

  1. Night/Weekend Testing: Unlike humans, AI Agents don't need sleep. You can run 10,000 permutations overnight.
  2. Cost Reduction: The study projects a 0.5 man-month reduction per project cycle.
  3. Zero Maintenance: When the UI changes, you don't rewrite scripts. The AI adapts.

Security & Ethics Warning

While powerful, AI Agents executing web actions carry risks:

  • Data Leakage: Be careful sending proprietary specs or PII to public LLMs (OpenAI/Anthropic). Use Azure OpenAI or local models (Llama 3) for enterprise data.
  • Runaway Agents: Always implement a "Human-in-the-Loop" or a hard timeout to prevent the agent from clicking "Delete Database" if it gets confused.
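A minimal guard for the second risk, assuming the agent exposes an awaitable run() as in Phase 2 (the slow_agent_run coroutine here is a stand-in for a confused agent that never finishes):

```python
import asyncio

async def slow_agent_run():
    # Stand-in for agent.run(): pretends to be an agent stuck in a loop.
    await asyncio.sleep(60)
    return "done"

async def run_with_timeout(coro, seconds):
    # Hard timeout: cancel the agent rather than let it wander indefinitely.
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return "ABORTED: agent exceeded time budget"

result = asyncio.run(run_with_timeout(slow_agent_run(), seconds=0.1))
print(result)  # ABORTED: agent exceeded time budget
```

asyncio.wait_for cancels the wrapped coroutine on timeout, so the browser session is torn down instead of left clicking around unsupervised.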

Conclusion

The days of writing brittle XPath selectors are numbered. By combining Combinatorial Logic (to determine what to test) with AI Agents (to determine how to test), we can build a testing pipeline that heals itself.

**Your Next Step:** Don't rewrite your Selenium suite yet. Start by picking one flaky test flow. Replace it with a browser-use agent and see if it survives the next UI update.

