Social media giant X, formerly known as Twitter, is under renewed fire for alleged unauthorised siphoning of personal data from over 60 million European Union users to train its artificial intelligence (AI) systems.
The platform began processing user data without seeking permission, leading to a fresh wave of privacy complaints across multiple European countries.
The issue came to light when a vigilant user noticed a new setting that revealed X had quietly started using post data from EU users for its Grok AI chatbot. This discovery drew immediate concern from the Irish Data Protection Commission (DPC), the main body responsible for overseeing X’s compliance with the General Data Protection Regulation (GDPR).
The Irish DPC quickly initiated legal proceedings against X, seeking to halt the unauthorised data processing. However, privacy advocates, including the non-profit organisation noyb, have condemned the DPC’s response as insufficient.
Noyb, led by privacy activist Max Schrems, has lodged complaints in nine countries, arguing that X’s actions violate several GDPR provisions. These complaints focus on the lack of transparency and consent in X’s data handling practices.
This situation has reignited the debate over personal data protection in the EU. Under the GDPR, companies must have a valid legal basis for processing personal data, typically user consent. X, however, has sought to justify its actions under the “legitimate interest” clause, a defence the European Court of Justice has already rejected in similar cases involving other tech giants.
Despite this, X continued processing data until early August 2024 without adequately informing users or offering them a chance to opt out. A setting allowing users to block the processing was only added in late July, long after their data had been ingested into the AI system.
Max Schrems and other privacy advocates are calling for stricter enforcement of the GDPR, noting that companies must obtain consent before using personal data for AI training or any other purpose. They argue that the situation underscores the need for stronger oversight to prevent companies from bypassing user rights.