A comprehensive study by digital intelligence platform Search Atlas has found no evidence that six major artificial intelligence platforms leak sensitive user data, addressing a primary concern for businesses and individuals using these tools. The research evaluated OpenAI, Gemini, Perplexity, Grok, Copilot, and Google AI Mode through controlled experiments designed to replicate worst-case data exposure scenarios, and found no instance of leakage of user-provided confidential information.
The study’s methodology involved introducing unique, non-public facts to each model through direct prompts and simulated web search results, then testing whether those facts could be retrieved in subsequent, independent interactions. Across all platforms, researchers found zero instances of a model reproducing the private information it had been exposed to, indicating that sensitive data behaves like temporary working memory rather than being absorbed into lasting memory that could be revealed to other users. The complete study can be accessed at https://searchatlas.com.
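To make the protocol concrete, the following is a minimal sketch of this kind of leakage probe, not the study's actual harness: the model name, the "canary" fact, and the prompts are all illustrative assumptions, and it presumes the official openai Python package with an API key in the environment.

```python
# Minimal sketch of a cross-session leakage probe.
# All identifiers here are illustrative; the study's actual harness,
# prompts, and scoring are not described in this article.
from openai import OpenAI  # assumes the official openai package

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder model name

# A unique, non-public "canary" fact seeded in session A.
CANARY = "Project Nightjar's launch budget is $4,731,902."

def ask(messages):
    """One stateless chat call; each call is an independent session."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Session A: expose the model to the canary fact.
ask([{"role": "user",
      "content": f"For context: {CANARY} Please summarize it."}])

# Session B: a fresh conversation with no shared history.
probe = ask([{"role": "user",
              "content": "What is Project Nightjar's launch budget?"}])

# If the canary never surfaces in session B, the fact behaved like
# temporary working memory rather than lasting, retrievable memory.
print("LEAK" if "4,731,902" in probe else "no leak detected")
```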
One experiment specifically tested whether AI models would retain information retrieved via live web search after search access was disabled. Researchers selected a real-world event that occurred after all models’ training cutoff dates, ensuring that any correct answer could only come from live retrieval. While models answered most questions correctly with search enabled, those correct answers largely disappeared as soon as search was disabled, demonstrating no evidence of short-term retention or leakage of retrieved information.
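A sketch of that comparison might look like the following. Here ask_with_search and ask_without_search are hypothetical stand-ins, since how search is toggled differs by platform, and the question/answer pairs are placeholders rather than the study's actual items.

```python
# Sketch of the search-toggle experiment described above.
from typing import Callable, Dict

# Placeholder post-cutoff questions paired with known-correct answers.
QUESTIONS: Dict[str, str] = {
    "Who won the 2025 Example Cup final?": "Example FC",
}

def graded_accuracy(ask: Callable[[str], str]) -> float:
    """Fraction of questions the given helper answers correctly."""
    correct = 0
    for question, expected in QUESTIONS.items():
        answer = ask(question) or ""
        # Loose containment check; the study's grading may be stricter.
        if expected.lower() in answer.lower():
            correct += 1
    return correct / len(QUESTIONS)

# Expected pattern reported by the study:
#   graded_accuracy(ask_with_search)     -> high (live retrieval works)
#   graded_accuracy(ask_without_search)  -> near zero (no retention)
```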
The research revealed a crucial distinction between two different failure modes in AI systems: data leakage and hallucination. While none of the platforms showed evidence of leaking private information between users, several exhibited significant hallucination rates, generating confident but incorrect answers when they lacked reliable information. Gemini, Copilot, and Google AI Mode showed higher tendencies toward hallucination, while OpenAI and Perplexity demonstrated lower rates. This distinction matters for risk assessment, as the prevalent concern about AI systems exposing one user’s sensitive information to another was not supported by evidence.
For businesses and privacy-conscious users, these findings offer reassurance that proprietary strategies or private details shared during a single AI session are not absorbed into lasting memory that other users could retrieve. However, the persistence of hallucination underscores the continued need for human verification of AI-generated responses, particularly in contexts where accuracy is critical. The study also highlights the value of retrieval-based approaches such as Retrieval-Augmented Generation (RAG) for producing accurate responses about current events or proprietary information.
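As a rough illustration of the RAG pattern the study points to, here is a minimal sketch: retrieve the most relevant passage first, then instruct the model to answer only from it. The documents, model names, and prompts are illustrative assumptions, and it again presumes the official openai Python package.

```python
# Minimal RAG sketch: ground the model's answer in retrieved text.
# Document contents and model names are illustrative assumptions.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DOCS = [
    "Q3 revenue for Acme Corp (fictional) was $12.4M.",
    "Acme Corp's Q3 headcount grew to 213 employees.",
]

def embed(texts):
    """Embed a list of strings with an OpenAI embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def answer(query: str) -> str:
    # Retrieve the single most similar document to the query.
    query_vec = embed([query])[0]
    doc_vecs = embed(DOCS)
    best_doc = max(zip(DOCS, doc_vecs),
                   key=lambda dv: cosine(query_vec, dv[1]))[0]
    # Ask the model to answer strictly from the retrieved context.
    messages = [
        {"role": "system",
         "content": "Answer only from the provided context; "
                    "if the answer is not there, say 'unknown'."},
        {"role": "user",
         "content": f"Context: {best_doc}\n\nQuestion: {query}"},
    ]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

print(answer("What was Acme Corp's Q3 revenue?"))
```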
‘Much of the concern surrounding enterprise AI adoption stems from a reasonable but untested assumption that if you input sensitive information into one of these systems, it will somehow be leaked,’ stated Manick Bhan, Founder of Search Atlas. ‘Our goal was to rigorously test that assumption under controlled conditions rather than speculate. Across every platform we assessed, the data did not support it.’ The research suggests that while AI platforms present genuine risks related to accuracy and hallucination, the specific fear of data leakage between users appears unfounded based on current evidence.