The European Union’s top court has sided with a privacy challenge to Meta’s data retention policies. It ruled on Friday that social networks, such as Facebook, cannot keep using people’s information for ad targeting indefinitely.
The judgement could have major implications for how Meta and other ad-funded social networks operate in the region.
Limits on how long personal data can be kept must be applied to comply with the data minimization principle contained in the bloc’s General Data Protection Regulation (GDPR). Breaches of the regime can lead to fines of up to 4% of global annual turnover, which, in Meta’s case, could put it on the hook for billions more in penalties (NB: it is already at the top of the leaderboard of Big Tech GDPR breachers).
The CJEU ruling follows an earlier opinion on the case, published by a court adviser back in April, which also backed limits on the retention of personal data for ad targeting.
Contacted for a response, Meta spokesman Matt Pollard said the company is waiting to see the full judgement.
“We await the publication of the Court’s judgment and will have more to share in due course,” he told TechCrunch via email. “Meta takes privacy very seriously and has invested over five billion Euros to embed privacy at the heart of all of our products. Everyone using Facebook has access to a wide range of settings and tools that allow people to manage how we use their information.”
The adtech giant makes money by tracking and profiling users of its social networks, both on its own services and around the web, via a network of technologies including cookies, pixels and social plug-ins, in order to sell micro-targeted advertising. So any limits on its ability to continuously profile web users in a region that’s major for its business could hit its revenue.
Last year, Meta suggested that around 10% of its global ad revenue is generated in the EU.
The CJEU ruling follows a referral from a court in Austria, where European privacy campaigner Max Schrems had challenged Facebook’s data collection and its legal basis for advertising, among other issues.
Commenting on the win in a statement published by Schrems’ privacy rights non-profit noyb, his lawyer, Katharina Raabe-Stuppnig, wrote: “We are very pleased by the ruling, even though this result was very much expected.”
“Meta has basically been building a huge data pool on users for 20 years now, and it is growing every day. However, EU law requires ‘data minimisation’. Following this ruling only a small part of Meta’s data pool will be allowed to be used for advertising — even when users consent to ads. This ruling also applies to any other online advertisement company, that does not have stringent data deletion practices,” she added.
The original challenge to Meta’s ad business dates back to 2014 but was not fully heard in Austria until 2020, per noyb. The Austrian supreme court then referred several legal questions to the CJEU in 2021. Some were answered via a separate challenge to Meta/Facebook in a July 2023 CJEU ruling, which struck down the company’s ability to claim a “legitimate interest” to process people’s data for ads. The remaining two questions have now been dealt with by the CJEU. And it’s more bad news for Meta’s surveillance-based ad business: limits do apply.
Summarizing this component of the judgement in a press release, the CJEU wrote: “An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data.”
The ruling looks important on account of how ad businesses such as Meta’s function. Crudely put, the more of your data they can grab, the better, as far as they are concerned.
Back in 2022, an internal memo penned by Meta engineers, and obtained by Vice’s Motherboard, likened the company’s data collection practices to tipping bottles of ink into a vast lake. The memo suggested Meta’s aggregation of personal data lacked controls, and did not lend itself to siloing different types of data or applying retention limits.
Meta claimed at the time, however, that the document “does not describe our extensive processes and controls to comply with privacy regulations.”
How exactly the adtech giant will need to amend its data retention practices following the CJEU ruling remains to be seen. But the law is clear that it must have limits. “[Advertising] companies must develop data management protocols to gradually delete unneeded data or stop using them,” noyb suggests.
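In engineering terms, the protocol noyb describes boils down to a scheduled job that expires, or stops serving, records once they age past a fixed window. Below is a minimal, purely illustrative sketch in Python of such a retention job; the ad_profile_events table, its columns and the 90-day window are all invented for the example and describe no real system, Meta’s or otherwise.

```python
# Illustrative GDPR-style retention job: delete ad-profile events that
# have aged past a fixed window. Every name here and the 90-day window
# are hypothetical, invented for this sketch.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed window; the ruling sets no specific number


def purge_expired_events(conn: sqlite3.Connection) -> int:
    """Remove ad-targeting events older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    # Assumes collected_at stores ISO-8601 UTC timestamps, so string
    # comparison matches chronological order.
    cur = conn.execute(
        "DELETE FROM ad_profile_events WHERE collected_at < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount  # rows purged in this run


if __name__ == "__main__":
    conn = sqlite3.connect("ads.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ad_profile_events "
        "(user_id TEXT, collected_at TEXT, event TEXT)"
    )
    print(f"Purged {purge_expired_events(conn)} expired events")
```

The “stop using them” alternative in noyb’s phrasing could equally be implemented by flagging aged rows and excluding them from ad-targeting queries, rather than deleting them outright.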
The CJEU has also weighed in on a second question referred to it by the Austrian court as part of Schrems’ litigation. This concerns sensitive data that has been “manifestly made public” by the data subject, and whether such sensitive characteristics can be used for ad targeting as a result.
The court ruled that they cannot, upholding the GDPR’s purpose limitation principle.
“It would have a huge chilling effect on free speech, if you would lose your right to data protection in the moment that you criticise unlawful processing of personal data in public,” wrote Raabe-Stuppnig, welcoming that “the CJEU has rejected this notion.”
Asked about Meta’s use of so-called special category data (as sensitive personal information such as sexual orientation, health data and religious views is known under EU law), Pollard claimed the company does not process this information for ad targeting.
“We do not use special categories of data that users provide to us to personalise ads,” he wrote. “We also prohibit advertisers from sharing sensitive information in our terms and we filter out any potentially sensitive information that we’re able to detect. Further, we’ve taken steps to remove any advertiser targeting options based on topics perceived by users to be sensitive.”
This component of the CJEU ruling could have relevance beyond the operation of social media services, as tech giants, including Meta, have recently been scrambling to repurpose people’s personal data as AI training fodder. Scraping the internet is another tactic AI developers have used to amass the vast amounts of data required to train large language models and other generative AI models.
In both cases, grabbing people’s data for a new purpose (AI training) could breach the GDPR’s purpose limitation principle.