Search Atlas study reveals AI platforms show zero data leakage. Research on 6 major LLMs finds no evidence of sensitive information exposure, addressing key privacy concerns.

Study Finds No Evidence of Data Leakage Across Six Major AI Platforms

2026/03/25 19:10
3 min read
For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

A comprehensive study by digital intelligence platform Search Atlas has found no evidence that six major artificial intelligence platforms leak sensitive user data, addressing a primary concern for businesses and individuals using these tools. The research evaluated OpenAI, Gemini, Perplexity, Grok, Copilot, and Google AI Mode through controlled experiments designed to replicate worst-case data exposure scenarios, and found no leakage of user-provided confidential information on any platform.

The study’s methodology involved introducing unique, non-public facts to each model through direct prompts and simulated web search results, then testing whether these facts could be retrieved in subsequent interactions. Across all platforms, researchers found zero instances where models reproduced the private information they had been exposed to, indicating that sensitive data behaves like temporary working memory rather than being absorbed into lasting memory that could be revealed to other users. The complete study can be accessed at https://searchatlas.com.

One experiment specifically tested whether AI models would retain information retrieved via live web search after search access was disabled. Researchers selected a real-world event that occurred after all models’ training cutoff dates, ensuring any correct answers could only come from live retrieval. While models answered most questions correctly with search enabled, those correct answers largely disappeared once search was disabled, even immediately afterward, demonstrating no evidence of short-term retention or leakage of retrieved information.

The research revealed a crucial distinction between two different failure modes in AI systems: data leakage versus hallucination. While none of the platforms showed evidence of leaking private information between users, several exhibited significant hallucination rates, generating confident but incorrect answers when lacking reliable information. Gemini, Copilot, and Google AI Mode showed higher hallucination rates, while OpenAI and Perplexity showed lower ones. This distinction is significant for risk assessment, as the prevalent concern about AI systems exposing one user’s sensitive information to another was not supported by evidence.

For businesses and privacy-conscious users, these findings provide reassurance that proprietary strategies or private details shared during a single AI session are not absorbed into lasting memory that could be revealed to other users. However, the persistence of hallucination underscores the continued need for human verification of AI-generated responses, particularly in contexts where accuracy is critical. The study also highlights the importance of retrieval-based systems like Retrieval-Augmented Generation (RAG) for ensuring accurate responses about current events or proprietary information.

‘Much of the concern surrounding enterprise AI adoption stems from a reasonable but untested assumption that if you input sensitive information into one of these systems, it will somehow be leaked,’ stated Manick Bhan, Founder of Search Atlas. ‘Our goal was to rigorously test that assumption under controlled conditions rather than speculate. Across every platform we assessed, the data did not support it.’ The research suggests that while AI platforms present genuine risks related to accuracy and hallucination, the specific fear of data leakage between users appears unfounded based on current evidence.

Blockchain Registration, Verification & Enhancement provided by NewsRamp™

This news story relied on content distributed by Press Services. The source URL for this press release is Study Finds No Evidence of Data Leakage Across Six Major AI Platforms.

The post Study Finds No Evidence of Data Leakage Across Six Major AI Platforms appeared first on citybuzz.
