AI and the Law
Using AI in a legally compliant manner
Artificial intelligence is transforming how businesses operate, communicate and structure their business models. Whether in the form of generative AI, large language models, automated processes or AI-powered tools used in marketing, sales, HR or product development, the practical benefits are substantial — and so are the legal issues.
SimonGraeser advises companies, agencies, start-ups and creatives on the legally compliant use of AI. We combine legal precision with a clear understanding of digital processes and practical, business-oriented solutions. In this way, we help our clients seize opportunities, identify risks and establish robust legal frameworks for the use of AI.
AI in business
The use of AI is already widespread. Employees use chatbots, teams generate text and images, processes are automated and data is analysed more efficiently. Legal issues arise as early as the selection of tools, the input of information and the use of AI-generated outputs, including questions of data protection, confidentiality, copyright, liability and compliance. A clear legal assessment is essential in order to avoid infringements and to establish legally sound structures within the business from the outset.
Data protection and trade secrets
The use of AI is legally sensitive, particularly where personal data, confidential information or trade secrets are involved. Not every AI tool can be used for internal or sensitive content without further scrutiny. Relevant factors include, among others, the specific use case, the role of the provider, the way in which data is processed, storage mechanisms, international data transfers and internal approval procedures.
Copyright and content
AI also raises a wide range of issues in the field of intellectual property. May third-party content be used in prompts or workflows? Who owns the rights in AI-generated texts, images or drafts? Which usage rights should be contractually agreed with agencies, service providers or software vendors? Particularly in creative and brand-related projects, a careful and coherent rights analysis is essential.
The dynamic development of the law in this area is illustrated by recent cases and court decisions. In a landmark judgment, the Munich I Regional Court (Landgericht München I), in GEMA v OpenAI Ireland Ltd., largely upheld the claims for injunctive relief, disclosure of information and damages. According to the judgment, both the memorisation of copyright-protected song lyrics in language models and their largely verbatim reproduction in outputs may be relevant under copyright law. The court also rejected reliance on the text and data mining exception and attributed responsibility not to the users, but to the operators of the models. At the time this text was prepared, the judgment was not yet final and binding (Munich I Regional Court, judgment of 11 November 2025 – 42 O 14139/24).
Further proceedings are also pending. Penguin Random House Verlagsgruppe has likewise brought proceedings against OpenAI Ireland Ltd. before the Munich I Regional Court. According to the publisher, ChatGPT reproduces recognisable content from works by Ingo Siegner in response to simple prompts and also generates illustrations of the character “Der kleine Drache Kokosnuss” that allegedly bear a strong resemblance to the original. The publisher argues that this constitutes, among other things, unauthorised reproduction, communication to the public and evidence of “memorisation” of the works within the model. These allegations reflect the claimant’s submissions in pending proceedings and do not constitute judicial findings.
These developments clearly show that businesses using generative AI or commercially exploiting AI-generated content should carefully assess copyright risks, tool selection, chains of title, internal approval processes and governance structures at an early stage. For further detail, please also see our newsletter article on the decision of the Munich I Regional Court.
Contracts, liability and internal policies
Any business that uses AI, tolerates its use by employees or offers AI-supported services should clearly define responsibilities. This includes contracts with providers, internal AI policies, approval procedures, documentation, human oversight and clear allocation of responsibilities within the organisation. Legally robust governance not only reduces risk, but also builds trust in new digital processes.
AI Act and compliance
The European AI Act introduces graduated regulatory requirements for certain AI systems and relevant actors. These include, among other things, the obligation on providers and deployers of AI systems to ensure a sufficient level of AI literacy among their employees and other persons involved in the operation and use of such systems. Businesses should therefore assess their use of AI at an early stage from a legal and organisational perspective. Transparency, documentation, internal allocation of responsibilities and staff training are all becoming increasingly important.
Please feel free to contact us — we would be pleased to advise you.
Our Services
- Legal advice on the use of AI and generative AI in business
- Review of AI use cases from a data protection, confidentiality and compliance perspective
- Drafting and review of AI policies, usage rules and governance structures
- Contract drafting and contract review in connection with AI tools, SaaS solutions and technology projects
- Advice on copyright, usage rights, trade mark issues and the protection of trade secrets in the context of AI
- Legal support for AI-powered products, platforms and digital business models
- Assistance with risk assessment, documentation and internal approval processes
- Advice in relation to cease-and-desist letters, disputes and contentious matters involving AI
FAQs
What does the field “AI and the Law” cover?
“AI and the Law” covers all legal issues arising from the development, deployment, marketing and use of AI systems. This includes, in particular, data protection, copyright, contract law, liability, unfair competition law, trade secrets and regulatory requirements.
Who needs legal advice on the use of AI?
Legal advice on AI is not only relevant for technology companies. Agencies, start-ups, online retailers, software companies and creatives alike should review their use of AI wherever content is generated automatically, data is processed or business processes are supported by AI.
May employees simply use generative AI tools in their day-to-day work?
There is no one-size-fits-all answer. The relevant factors include the specific use case, the categories of data involved, the tool settings and the company’s internal rules. In many cases, use is possible, but using such tools without clear rules and without proper legal assessment entails significant risk.
What role does data protection play in the use of AI?
Data protection is of central importance wherever personal data is processed. It must be assessed on what legal basis the processing takes place, what information must be provided to data subjects, whether a data processing agreement is required and how risks arising from input, storage or onward processing can be minimised.
Can prompts themselves be legally problematic?
They can be. Risks arise where prompts disclose confidential information, personal data, client-related information or trade secrets. For that reason, clear internal rules should define what may and may not be entered into AI systems.
Who owns the rights in AI-generated content?
This depends very much on the circumstances of the individual case. Relevant factors include the degree of human involvement, the contractual arrangements with service providers or tool vendors and the question whether third-party rights are affected. In commercial use cases, the chain of title must be reviewed with particular care.
Do we need an internal AI policy?
Yes. A well-drafted AI policy establishes clear responsibilities, regulates permissible tools and use cases, defines review and approval procedures and helps reduce data protection, confidentiality and liability risks. It is the first step towards the long-term, legally compliant use of AI without unnecessary commercial exposure.
What does the Munich I Regional Court decision in GEMA v OpenAI mean for businesses?
The decision shows that copyright risks arising from the use of generative AI must be taken very seriously. According to the judgment, both the memorisation of protected content within the model and its largely verbatim reproduction in outputs may be relevant under copyright law. The judgment also makes clear that responsibility cannot simply be shifted to users. Even though the decision was not yet final and binding at the time of publication, it sends a strong signal to businesses deploying AI systems or exploiting AI-generated content.
What is the significance of the Penguin Random House proceedings against OpenAI?
These proceedings show that copyright issues relating to training, memorisation and output in generative AI are becoming increasingly concrete and commercially significant. While GEMA v OpenAI has already resulted in a first-instance court decision, the claim brought by Penguin Random House is a further example of rights holders taking action against the recognisable reproduction of protected works and allegedly memorised content. For businesses, this is a clear signal that AI use cases, rights clearance and internal approval processes must be structured not only from a technical, but also from a copyright perspective.
How can SimonGraeser support us?
We review your AI use cases, identify legal risks, develop practical policies and contractual solutions, and assist you with the legally compliant introduction or further development of AI within your business.