Background
The Regional Court of Hamburg had to decide on an AI-generated post on the platform X that was published via the account “@grok”. The post claimed that the politically active association Campact was largely dependent on state funding or federal funds. According to the court’s findings, that statement was false.
The affected association sought to stop further dissemination of the statement by way of a preliminary injunction. The court based its decision on the fact that the post was still accessible online and therefore constituted an ongoing infringement.
Decision of the Regional Court of Hamburg
The court affirmed a claim for injunctive relief under Sections 1004(1) sentence 2 (applied by analogy) and 823(1) of the German Civil Code in conjunction with Articles 19(3) and 2(1) of the German Basic Law. The key point was that the average recipient would understand the statement as an assertion of fact and that, in the specific context, it was defamatory. Since the respondent could not substantiate the statement, the court treated it, for the purposes of the proceedings, as an untrue factual allegation.
Of particular relevance is the court’s finding that the legal assessment under the law governing public statements does not change merely because the post was created by AI. Users would not classify such statements differently for that reason alone; on the contrary, a reference to a supposedly fact-based AI may even increase their persuasive effect. The operator of the account was held responsible for the content because, by publishing the statement on its account, it had adopted it as its own.
Practical relevance
The decision indicates that, for publicly disseminated AI content, courts do not appear willing to apply a fundamentally different liability standard from the one applied to statements written by humans. For companies, platforms, agencies and other operators of public accounts, this increases the importance of review and approval processes for AI-generated content.
This is particularly relevant where AI systems post automatically, generate replies or present content with an appearance of enhanced factual reliability. Anyone publicly disseminating such content cannot simply argue that the statement “came from the AI”. At the same time, according to the source text, the decision concerns public dissemination only; it does not address the legal treatment of purely one-to-one chatbot outputs that are never published.
To the point
- Publicly disseminated AI-generated statements may be assessed legally in the same way as statements written by humans.
- False and defamatory statements of fact remain unlawful even when they were generated by AI.
- Anyone operating an account and publishing AI content may be held responsible for that content.
- For companies and platforms, review, filtering and approval mechanisms for AI outputs are becoming increasingly important.
- According to the source text, it remains open whether the same standards apply to unpublished one-to-one chatbot outputs.
Source: Beck Online