
A Live, Time-Separated Case Study Demonstrating Persistent AI Visibility Across Major AI Answer Engines

Introduction: From Rankings to AI Answer Inclusion

As AI answer engines such as ChatGPT, Perplexity, Microsoft Copilot, and Google AI Mode increasingly mediate how users discover information, visibility is no longer defined solely by ranked search results. It is defined by whether an AI system selects a source for direct inclusion within its generated answer — and the position at which that source appears.

This shift has driven growing interest in Generative Engine Optimisation (GEO): the practice of structuring content, entities, and trust signals so that AI systems recognise a source as authoritative, reliable, and suitable for citation.

While GEO is now widely discussed, there remains limited publicly available evidence demonstrating repeatable AI source positioning over time, measured using live retrieval rather than simulations or retrospective tooling.

This article documents a live, time-separated case study, designed and executed by Paul Rowe, Chief Generative Engine Optimisation Officer & CEO of NeuralAdX Ltd, examining how major AI answer engines surfaced sources for the same commercial query across a three-month interval.

The full live recordings and transcripts referenced in this case study are publicly available for independent verification:
https://neuraladx.com/proof-that-generative-engine-optimisation-works-video/

What Generative Engine Optimisation Actually Changes

Traditional SEO focuses on influencing where a page appears within a ranked list of links.
Generative Engine Optimisation focuses on whether a source is selected and positioned within an AI-generated answer.

In practice, GEO aims to ensure that AI systems:

  • recognise a source as a clearly defined topical authority
  • can reliably extract answer-ready information
  • continue to trust the source across repeated evaluation cycles

The objective is not simply visibility, but answer eligibility — becoming a source AI systems repeatedly choose when generating responses.

Methodology: Controlled Live Retrieval Testing

To ensure the results reflected genuine AI behaviour rather than isolated anomalies, the tests were designed using a controlled, repeatable methodology.

The following constraints were applied:

  • one website and one entity
  • one unchanged query: “What is the cost of generative engine optimisation in the UK?”
  • one unchanged content and entity structure
  • live AI retrieval captured via continuous screen recording
  • transcripts enabled to verify real-time output

Two live retrieval events were recorded:

  • 19th September 2025 — initial retrieval
  • 10th December 2025 — follow-up retrieval after three months

No optimisation changes were introduced between tests. This allowed the results to reflect how AI systems independently reassessed the same source over time.
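The comparison the methodology describes can be sketched as a small script. This is a hypothetical illustration only: the platform names and positions are taken from the results reported below, and the `compare` helper is an invented convenience for classifying how each platform's behaviour changed between the two retrieval events (`None` marks "did not surface").

```python
from typing import Optional

# Observed position of the source in each platform's answer (1 = cited first).
# Values are taken from the case study's September and December observations.
september = {
    "ChatGPT": 1,
    "Perplexity": 1,
    "Microsoft Copilot": 1,
    "Google AI Mode": 3,
}
december = {
    "ChatGPT": 1,
    "Perplexity": 1,
    "Microsoft Copilot": None,  # did not surface a generated answer
    "Google AI Mode": 3,
}

def compare(before: dict, after: dict) -> dict:
    """Classify each platform's change between two retrieval events."""
    result = {}
    for platform in before:
        b, a = before[platform], after.get(platform)
        if b == a:
            result[platform] = "stable"
        elif a is None:
            result[platform] = "no longer surfaced"
        elif b is None:
            result[platform] = "newly surfaced"
        else:
            result[platform] = f"moved {b} -> {a}"
    return result

for platform, change in compare(september, december).items():
    print(f"{platform}: {change}")
```

Because the content and entity structure were held constant between tests, any change the comparison reports reflects the platform's own re-evaluation rather than an optimisation intervention.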

Test 1 Results — September (Initial Live Retrieval)

During the September test, four AI systems were observed. Results are presented in a fixed, consistent order.

ChatGPT — September (#1)

[IMAGE TO BE INSERTED]
Caption: ChatGPT live response surfacing NeuralAdX Ltd as the #1 referenced source for the query “What is the cost of generative engine optimisation in the UK?” (19 September).

Perplexity — September (#1)

[IMAGE TO BE INSERTED]
Caption: Perplexity response surfacing NeuralAdX Ltd as the #1 referenced source, listed first within its sources panel for the same query (19 September).

Microsoft Copilot — September (#1)

[IMAGE TO BE INSERTED]
Caption: Microsoft Copilot generative response surfacing NeuralAdX Ltd as the #1 referenced source for the same query (19 September).

Google AI Mode — September (#3)

[IMAGE TO BE INSERTED]
Caption: Google AI Mode response surfacing NeuralAdX Ltd as the #3 referenced source within the AI-generated answer for the same query (19 September).

Observation (September):

  • ChatGPT: #1 referenced source
  • Perplexity: #1 referenced source
  • Microsoft Copilot: #1 referenced source
  • Google AI Mode: #3 referenced source

Test 2 Results — December (Three-Month Follow-Up)

The same query was tested again on 10th December 2025, using the same methodology and unchanged content.

ChatGPT — December (#1)

[IMAGE TO BE INSERTED]
Caption: ChatGPT follow-up response surfacing NeuralAdX Ltd as the #1 referenced source for the same query (10 December).

Perplexity — December (#1)

[IMAGE TO BE INSERTED]
Caption: Perplexity follow-up response surfacing NeuralAdX Ltd as the #1 referenced source, listed first within its sources panel for the same query (10 December).

Microsoft Copilot — December (did not surface)

No image available
During the December test, Microsoft Copilot did not surface a generated answer for this query under the recorded conditions.

Google AI Mode — December (#3)

[IMAGE TO BE INSERTED]
Caption: Google AI Mode follow-up response surfacing NeuralAdX Ltd as the #3 referenced source within the AI-generated answer for the same query (10 December).

Observation (December):

  • ChatGPT: #1 referenced source
  • Perplexity: #1 referenced source
  • Microsoft Copilot: did not surface
  • Google AI Mode: #3 referenced source

Why Platform Differences Matter

AI systems do not apply uniform retrieval or synthesis logic; each platform uses its own thresholds, intent classifiers, and answer-generation rules.

This case study demonstrates that:

  • Eligibility can persist even when prominence differs (Google AI Mode at #3 across both tests)
  • Surfacing behaviour can change over time without implying content regression (Copilot surfacing in September but not December)

This reinforces a core GEO principle:

AI visibility must be evaluated per platform and per time period using live retrieval evidence.

Conclusion

This case study does not attempt to generalise results across all AI systems or all query conditions.

It does demonstrate — using live, time-separated evidence — that when content structure, entity clarity, and trust signals align with AI evaluation mechanisms, persistent AI visibility and inclusion are achievable, even where prominence differs by platform.

Author

Paul Rowe is Chief Generative Engine Optimisation Officer & CEO at NeuralAdX Ltd, specialising in evidence-based strategies designed to achieve measurable AI answer-engine visibility.
