Conversations with Grok, the chatbot developed by Elon Musk’s artificial intelligence company xAI, are appearing on Google in their hundreds of thousands.
A simple Google search reveals user chats that were never meant to be public, ranging from harmless writing prompts to disturbing exchanges about drugs, suicide, and bomb-making.
The leak stems from Grok’s “share” button. When users clicked it, they were given a unique link to send their chats by email, text, or social media.
What most did not know is that these links were also published as pages on Grok’s website, where they became visible to search engines including Google, Bing, and DuckDuckGo. Forbes found more than 370,000 such conversations indexed and freely accessible.
Some of the exposed content is relatively ordinary: people asking the bot to draft tweets, summarise news, or generate business ideas. British journalist Andrew Clifford used it to create summaries for his website Sentinel Current.
He told Forbes: “I would be a bit peeved but there was nothing on there that shouldn’t be there.” But alongside these harmless tasks are conversations no company would want to be linked to.
Among the results are chats in which Grok reportedly explained how to manufacture fentanyl, listed suicide methods, offered malware code, and even laid out a plan for the assassination of Elon Musk himself.
Users also uploaded spreadsheets, documents, and images, all of which became searchable once shared. Some of the material contained names, passwords, and personal medical information.
The discovery undercuts xAI’s earlier public stance. In July, after ChatGPT users complained of a similar issue, Grok’s official account stated that it had “no such sharing feature” and that it “prioritize[s]” user privacy. Musk reinforced this with a post on X that read: “Grok ftw.” The current evidence shows otherwise.
Professionals were not spared either. Nathan Lambert, a computational scientist at the Allen Institute for AI, used Grok to summarise his blog posts and later learned those chats were visible online. “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” he told Forbes.
Google says website owners have the power to block such indexing if they choose. “Publishers of these pages have full control over whether they are indexed,” a company spokesperson explained. The statement puts responsibility squarely back on xAI, which has not responded to repeated requests for comment.
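The article does not say which mechanism xAI would need to use, but the standard ways a publisher keeps pages out of search results are a robots.txt rule or a per-page noindex directive. A minimal sketch, assuming a hypothetical /share/ URL path for shared chats:

```
# Hypothetical robots.txt entry: tells compliant crawlers not to
# crawl shared-chat pages (the /share/ path is an assumption, not
# xAI's actual URL structure)
User-agent: *
Disallow: /share/
```

Alternatively, serving each shared-chat page with a `<meta name="robots" content="noindex">` tag in its HTML head instructs compliant search engines not to list the page even if they crawl it.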
Meanwhile, opportunists are taking advantage of the exposure. SEO specialists on LinkedIn and underground forums like BlackHatWorld have begun experimenting with Grok’s share links to manipulate search rankings.
Satish Kumar, CEO of Pyrite Technologies, demonstrated to Forbes how companies were already pushing dissertation-writing services into Google results through Grok. “Every shared chat on Grok is fully indexable and searchable on Google,” he said.
This issue places Grok alongside other AI platforms that have struggled with the visibility of shared conversations. OpenAI briefly allowed shared ChatGPT conversations to appear on Google before reversing course and calling it a “short-lived experiment.”
Google’s own chatbot Bard stopped indexing chats in 2023, while Meta continues to allow its AI interactions to be found online.
For Musk and xAI, the fallout exposes a gap between public assurances and the reality of user data left in the open, raising questions about transparency, user trust, and whether the company took adequate steps to protect people from accidental disclosure.