As AI answer engines such as ChatGPT, Perplexity, Microsoft Copilot, and Google AI Mode increasingly mediate how users discover information, visibility is no longer defined solely by ranked search results. It is defined by whether an AI system selects a source for direct inclusion within its generated answer — and the position at which that source appears.
This shift has driven growing interest in Generative Engine Optimisation (GEO): the practice of structuring content, entities, and trust signals so that AI systems recognise a source as authoritative, reliable, and suitable for citation.
While GEO is now widely discussed, there remains limited publicly available evidence demonstrating repeatable AI source positioning over time, measured using live retrieval rather than simulations or retrospective tooling.
This article documents a live, time-separated case study, designed and executed by Paul Rowe, Chief Generative Engine Optimisation Officer & CEO of NeuralAdX Ltd, examining how major AI answer engines surfaced sources for the same commercial query across a three-month interval.
The full live recordings and transcripts referenced in this case study are publicly available for independent verification:
https://neuraladx.com/proof-that-generative-engine-optimisation-works-video/
Traditional SEO focuses on influencing where a page appears within a ranked list of links.
Generative Engine Optimisation focuses on whether a source is selected and positioned within an AI-generated answer.
In practice, GEO aims to ensure that AI systems recognise a source as authoritative, trust it, and select it for citation within their generated answers.
The objective is not simply visibility, but answer eligibility — becoming a source AI systems repeatedly choose when generating responses.
To ensure the results reflected genuine AI behaviour rather than isolated anomalies, the tests were designed using a controlled, repeatable methodology.
The following constraints were applied: the same commercial query was used in every test, responses were captured through live retrieval and recorded in full, and the platforms were reported in a fixed, consistent order.
Two live retrieval events were recorded: the first on 19 September 2025 and the second on 10 December 2025.
No optimisation changes were introduced between tests. This allowed the results to reflect how AI systems independently reassessed the same source over time.
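The time-separated comparison described above can be sketched as a simple logging-and-diff routine. The `Observation` structure and `compare_rounds` helper below are illustrative assumptions, not tooling used in the study, though the sample data mirrors the positions reported later in this article.

```python
# Hypothetical sketch: comparing cited-source positions across two
# time-separated live retrieval tests. The data model and comparison
# logic are assumptions for illustration; the sample values reflect
# the observations reported in this case study.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    platform: str
    test_date: str           # ISO date of the live retrieval event
    position: Optional[int]  # source's rank in the answer; None = not surfaced

def compare_rounds(first, second):
    """Pair observations by platform and describe how each position changed."""
    baseline = {o.platform: o.position for o in first}
    report = {}
    for o in second:
        before = baseline.get(o.platform)
        if o.position is None:
            report[o.platform] = "no generated answer"
        elif before == o.position:
            report[o.platform] = f"held position #{o.position}"
        else:
            report[o.platform] = f"moved from #{before} to #{o.position}"
    return report

september = [
    Observation("ChatGPT", "2025-09-19", 1),
    Observation("Perplexity", "2025-09-19", 1),
    Observation("Microsoft Copilot", "2025-09-19", 1),
    Observation("Google AI Mode", "2025-09-19", 3),
]
december = [
    Observation("ChatGPT", "2025-12-10", 1),
    Observation("Perplexity", "2025-12-10", 1),
    Observation("Microsoft Copilot", "2025-12-10", None),
    Observation("Google AI Mode", "2025-12-10", 3),
]

for platform, outcome in compare_rounds(september, december).items():
    print(f"{platform}: {outcome}")
# Prints, e.g.: "ChatGPT: held position #1" ... "Microsoft Copilot: no generated answer"
```

Keeping each round as raw per-platform observations, rather than a single aggregate score, matches the study's principle that visibility must be assessed per platform and per time period.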
During the September test, four AI systems were observed. Results are presented in a fixed, consistent order.
[IMAGE TO BE INSERTED]
Caption: ChatGPT live response surfacing NeuralAdX Ltd as the #1 referenced source for the query “What is the cost of generative engine optimisation in the UK?” (19 September).
[IMAGE TO BE INSERTED]
Caption: Perplexity response surfacing NeuralAdX Ltd as the #1 referenced source, listed first within its sources panel for the same query (19 September).
[IMAGE TO BE INSERTED]
Caption: Microsoft Copilot generative response surfacing NeuralAdX Ltd as the #1 referenced source for the same query (19 September).
[IMAGE TO BE INSERTED]
Caption: Google AI Mode response surfacing NeuralAdX Ltd as the #3 referenced source within the AI-generated answer for the same query (19 September).
Observation (September): ChatGPT, Perplexity, and Microsoft Copilot each surfaced NeuralAdX Ltd as the #1 referenced source, while Google AI Mode surfaced it at position #3 within its AI-generated answer.
The same query was tested again on 10 December 2025, using the same methodology and unchanged content.
[IMAGE TO BE INSERTED]
Caption: ChatGPT follow-up response surfacing NeuralAdX Ltd as the #1 referenced source for the same query (10 December).
[IMAGE TO BE INSERTED]
Caption: Perplexity follow-up response surfacing NeuralAdX Ltd as the #1 referenced source, listed first within its sources panel for the same query (10 December).
No image available
During the December test, Microsoft Copilot did not surface a generated answer for this query under the recorded conditions.
[IMAGE TO BE INSERTED]
Caption: Google AI Mode follow-up response surfacing NeuralAdX Ltd as the #3 referenced source within the AI-generated answer for the same query (10 December).
Observation (December): ChatGPT and Perplexity again surfaced NeuralAdX Ltd as the #1 referenced source, Google AI Mode again surfaced it at position #3, and Microsoft Copilot did not return a generated answer under the recorded conditions.
AI systems do not apply uniform retrieval or synthesis logic. Each platform applies different thresholds, intent classifiers, and answer-generation rules.
This case study demonstrates that the same source can be repeatedly selected and consistently positioned by multiple AI answer engines across a three-month interval without any intervening optimisation changes, and that inclusion behaviour differs by platform, as Copilot's December result shows.
This reinforces a core GEO principle:
AI visibility must be evaluated per platform and per time period using live retrieval evidence.
This case study does not attempt to generalise results across all AI systems or all query conditions.
It does demonstrate — using live, time-separated evidence — that when content structure, entity clarity, and trust signals align with AI evaluation mechanisms, persistent AI visibility and inclusion are achievable, even where prominence differs by platform.
Paul Rowe is Chief Generative Engine Optimisation Officer & CEO at NeuralAdX Ltd, specialising in evidence-based strategies designed to achieve measurable AI answer-engine visibility.


