Background
In interim proceedings, the Cologne Higher Regional Court had to decide on the announced processing of publicly shared personal data from Facebook and Instagram users for the training of AI systems. The case was triggered by the announcement that, from 27 May 2025, publicly posted data of adult users as well as interactions with Meta’s AI model would be used to develop and improve its AI systems. This was to include profile pictures, public comments, ratings, reviews, avatars, photos, videos, audio and related metadata.
A consumer protection association sought to prohibit Meta from carrying out this processing by way of a preliminary injunction. At the core of the dispute was whether the use of this data complies with the GDPR and the Digital Markets Act.
Court’s decision
The 15th Civil Senate rejected the application for a preliminary injunction. Based on the summary examination that governs interim proceedings, the court found that the association had no claim for injunctive relief.
The court considered the intended data processing likely lawful. A key point was that, in the Senate’s view, Meta may rely on a legitimate interest under Article 6(1), first sentence, point (f) GDPR. The Cologne Higher Regional Court emphasised that training generative AI with large volumes of data can serve an interest that is concrete and present, and that, on the evidence available in interim proceedings, no equally suitable and less intrusive alternatives had been established.
GDPR, DMA and special categories of data
The court likewise rejected a violation of Article 5(2) DMA. In the specific case, incorporating data from Facebook and Instagram into a single AI training dataset did not amount to “combining” within the meaning of that provision, because there was no targeted linking of personal data of the same person across different core platform services.
The court further clarified that the GDPR is not displaced by the AI Act or the DMA; rather, these regimes apply in parallel. In the Senate’s view, Article 9(1) GDPR also did not ultimately preclude the processing, even though the training dataset could contain special categories of personal data. In this context, the court referred, among other things, to the specific features of AI training with mass data and to a narrow interpretation of the prohibition in the procedural context at hand.
Reasonable expectations, safeguards and practical relevance
For the balancing exercise, it was important that Meta had provided for various safeguards, including de-identification measures, technical and organisational protections, and options to object to the use or to change the public visibility of content. According to the Senate, these measures could reduce the intensity of the interference.
It is also noteworthy that the court considered the use for AI training purposes reasonably foreseeable, at least for data posted to the relevant services from 26 June 2024 onwards. While the Senate did not readily assume such foreseeability for older data, it nevertheless concluded in the interim balancing exercise that the data subjects’ interests did not prevail even in that respect.
For businesses, the decision is particularly relevant because it shows that courts do not automatically classify AI training with publicly accessible user data as unlawful. At the same time, the judgment makes clear that transparency, objection mechanisms, data minimisation and technical safeguards play a central role in the legal assessment. The handling of third-party data, institutional accounts and content revealing special categories of personal data remains especially sensitive.
To the point
- The Cologne Higher Regional Court rejected the interim application against Meta’s announced AI training with publicly shared Facebook and Instagram data.
- In the court’s summary assessment, the processing may in principle be based on Article 6(1)(f) GDPR.
- In this case, the court did not classify the incorporation of Facebook and Instagram data into an AI training dataset as unlawful “combining” under Article 5(2) DMA.
- Safeguards such as de-identification, transparency and objection mechanisms were key to the balancing exercise.
- For businesses and platforms, the ruling shows that AI training with public data may be legally possible, but only with robust governance and a sound data protection architecture.