AI in marketing – Five legal red flags you cannot ignore
AI is rapidly reshaping marketing: it lets companies produce creative assets at unprecedented scale, automate customer engagement, and generate personalised content. Crucially, AI-assisted activity sits within the same regulatory framework that governs all marketing communications, covering data protection, consumer protection, intellectual property, and online safety.
Recent missteps by global brands – such as the backlash against Coca-Cola's AI holiday campaign and Valentino's 'tacky' AI visuals – prove that poorly governed AI is a reputational liability. For the modern founder, AI is not just a creative tool; it is a governance challenge.
Here are the five critical risks you need to manage:
1. The copyright trap
While generative AI feels like magic, its engine is built on the creative work of others. For an influencer or a brand, using AI content without a strategy is like building a house on someone else’s land.
The 'Shadow Copy' problem
Generative AI models (like Midjourney, ChatGPT, or Sora) are trained on billions of data points, including images and text owned by photographers, writers, and artists.
The Training Risk: Even if the final image looks 'new', the process of scraping that data may have been unauthorised.
The Output Risk: If you prompt an AI to create 'an influencer in a luxury lounge in the style of [Specific Photographer]', the AI might produce a 'derivative work' that is legally too similar to the original, triggering an infringement claim against you, the publisher.
The Reality – Lessons from Getty Images vs Stability AI (2025–2026)
This landmark UK case recently reached a pivotal point. While the High Court clarified that AI models do not 'store' copies of images like a hard drive, it also found that intangible electronic copies can still be considered 'infringing articles' under the Copyright, Designs and Patents Act 1988 (CDPA).
Crucially for brands: The court upheld claims regarding Trade Mark infringement where AI outputs accidentally reproduced Getty’s watermark. This means that if your AI-generated ad features a distorted logo or a 'ghost' of a competitor's watermark, you can be held liable for the infringement as the publisher.
How influencers and brands can mitigate exposure
To protect your reputation and your revenue, you must move beyond 'blind' AI usage:
Audit your AI vendors – Do not just tick the box on the terms and conditions. Ask: 'Does this tool have a licensed training set (like Adobe Firefly) or is it built on open-web scraping?'
Contractual Indemnities – If you are a brand hiring an influencer who uses AI, your contract must state that the influencer is liable for any copyright claims arising from their tools. If you are the influencer, ensure your AI tool’s 'Pro' license actually grants you commercial usage rights.
The 'Human–in–the–loop' Rule – Never post an AI output directly. A human should check for 'hallucinations' (like distorted logos or recognisable faces of celebrities) that could lead to passing–off or defamation claims.
Register your own IP – If you create something truly original with AI, remember that under current UK law the copyright position of purely AI-generated works is uncertain: the regime is built around human authorship, and protection is strongest where there is clear human creative input. To protect your AI-assisted brand assets, consider Trade Mark registration for logos and slogans, as these offer stronger protection than uncertain AI copyright.
2. Data protection and profiling
When you use AI to target followers or segment customers, you are processing personal data. Under the UK GDPR and the Data Protection Act 2018, it does not matter if an algorithm made the choice; the legal responsibility sits with the human who deployed it.
The end of 'set and forget' marketing
AI thrives on profiling: using past behaviour to predict future choices. However, UK law gives individuals the right to understand how decisions are being made about them. If your AI marketing tools are 'black boxes' where you cannot explain the logic behind a targeted ad, you are in breach of the Transparency Principle.
The Reality – The £250,000 Imgur fine (February 2026)
The ICO’s recent enforcement action against MediaLab (owner of Imgur) serves as a stark warning. The regulator issued a £247,590 fine for failing to protect children's privacy. The investigation found that the platform allowed children under 13 to use the service without age–verification safeguards, exposing them to harmful content while unlawfully processing their data for years.
The Lesson for Brands: High-risk processing, especially involving children or large-scale profiling, requires active safeguards. The ICO is no longer just 'educating'; it is using its fining powers against platforms that fail the Children's Code standards.
How to mitigate your data risk
To ensure your brand's AI strategy is compliant, you must move beyond basic privacy policies:
Conduct a mandatory DPIA – A Data Protection Impact Assessment is a legal requirement for 'high–risk' processing. If you are using new AI tech to profile consumers, you must document the risks and how you are reducing them before you launch.
Identify your Lawful Basis – Are you relying on 'Consent' or 'Legitimate Interests'? For AI profiling, the rules are strict. If your AI makes 'solely automated decisions' with significant effects, you usually need Explicit Consent.
Audit your 'Age Assurance' – If your brand appeals to a younger demographic, you must implement robust age–verification. As the Imgur case shows, simply stating 'must be 13+' in your terms is not enough.
Enable Human Intervention – Always provide a way for a customer to challenge an AI–driven decision. A human should be able to review and override the algorithm's output.
3. Misleading advertising
UK advertising regulation is 'technology–neutral'. This means the CAP Code (the rulebook for non-broadcast ads) applies with the same force to an AI–generated influencer as it does to a human one.
The 'Miracle' filter trap
The biggest risk for the beauty and wellness sector is efficacy exaggeration. If you use AI to smooth skin, thicken hair, or whiten teeth in a way that the actual product cannot achieve, your ad creates a misleading impression and is an immediate red flag for the ASA.
The Reality – In 2025 and 2026, the ASA has upheld numerous complaints against cosmetic brands (like Oneade and Skinny Tan) where 'before–and–after' results were deemed misleading because of digital filters or AI enhancements. The regulator's stance is firm: disclaimers like 'results may vary' or 'AI–generated image' cannot rescue an ad that creates a fundamentally false impression of what a product does.
Irresponsibility and the 'Active Ad Monitoring' system
You are no longer just being watched by consumers. In 2026, the ASA is using its own AI–powered Active Ad Monitoring system to scan thousands of social media posts per hour.
Irresponsible Content: This includes AI–generated ads that trivialise sensitive issues (like the Call of Duty airport security ad ban) or those that use AI to sexualise individuals or reinforce harmful stereotypes.
Vulnerable Audiences: If your AI ad uses 'satire' or 'humour' to target people struggling with the cost of living (similar to the Coinbase 2026 ban), it will be flagged as irresponsible.
How to mitigate your regulatory risk
To stay compliant while using AI, brands and influencers should adopt these three 'LegalLens' pillars:
Substantiate your claims – Before you post an AI–enhanced image, ensure you hold signed and dated evidence that the 'result' shown is representative of what a real consumer can achieve.
The 'Materially Misled' Test – Ask yourself: 'Would the average consumer buy this if they knew this image was AI–generated?' If the answer is 'no', you need to either change the image or provide a prominent disclosure.
Standardise your 'AI Label' – While there is no 'blanket' law requiring an AI label in the UK yet, the EU AI Act (taking effect in August 2026) will mandate markers for deepfakes and synthetic content. For UK–based brands with cross–border audiences, adopting an 'AI–Generated' watermark now is a best–practice move to avoid 'dark pattern' accusations (see the sketch after this list for one simple way to apply such a label).
Review your 'Satire' – If your AI content uses humour to address finance, health, or safety, have it reviewed for 'social responsibility'. What feels like a 'funny' AI render to your marketing team might look like 'harmful trivialisation' to the ASA.
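To make the 'AI Label' point above concrete, here is a minimal illustrative sketch (not a compliance requirement or a prescribed format) of stamping a visible 'AI-Generated' label onto a campaign image before it is published. It assumes the Python Pillow library is available; the file names and label placement are hypothetical choices for the example.

```python
# Illustrative sketch only: stamp a visible "AI-Generated" label on a
# campaign image before publication. Assumes the Pillow library is
# installed; file names and label placement are hypothetical.
from PIL import Image, ImageDraw, ImageFont


def add_ai_label(input_path: str, output_path: str, label: str = "AI-Generated") -> None:
    image = Image.open(input_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Default bitmap font keeps the sketch free of external font files.
    font = ImageFont.load_default()
    left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
    text_width, text_height = right - left, bottom - top

    # Bottom-right corner, with a semi-transparent backing strip for legibility.
    margin = 10
    x = image.width - text_width - margin
    y = image.height - text_height - margin
    draw.rectangle((x - 4, y - 3, x + text_width + 4, y + text_height + 3), fill=(0, 0, 0, 160))
    draw.text((x, y), label, font=font, fill=(255, 255, 255, 255))

    Image.alpha_composite(image, overlay).convert("RGB").save(output_path)


# Hypothetical usage: label the asset as the last step of your approval workflow.
add_ai_label("campaign_visual.png", "campaign_visual_labelled.jpg")
```

Whatever tooling you use, the principle is the same: apply the label before the asset leaves your approval workflow, not after a complaint lands.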
4. Deepfakes and the Online Safety Act
The rise of hyper-realistic synthetic media has forced a total overhaul of the UK’s legal framework. If your brand or marketing team uses AI to 'face–swap' or 'voice–clone' an individual without their permission, you are no longer just risking a civil lawsuit; you are risking a police investigation.
The new criminal frontier: Section 138
As of February 2026, the Data (Use and Access) Act 2025 has officially amended the Sexual Offences Act 2003. It is now a specific criminal offence to intentionally create or even request (commission) an intimate deepfake of an adult without their consent.
Creation vs Sharing: Previously, only 'sharing' was illegal. Now, the mere act of generating the image on your laptop, even if you never post it, can lead to a criminal record and up to two years in prison.
The 'Nudification' Ban: The government has also criminalised the design and supply of 'nudification' tools. For brands, this means using any software specifically built to digitally undress or sexualise individuals is strictly prohibited.
The Reality – Ofcom’s 'Priority Offence' status
In early 2026, the Home Office designated the creation of non-consensual intimate images as a 'Priority Offence' under the Online Safety Act. This places a massive 'duty of care' on platforms like Instagram, TikTok, and X to proactively block this content.
For Influencers: If you are found to be 'trolling' a competitor with deepfake content or even 'jokingly' face-swapping a celebrity into your ads, you face a permanent platform ban and potential prosecution. The Ofcom investigation into X’s Grok AI in January 2026 shows that regulators are now holding both the user and the platform accountable for 'harmful' synthetic outputs.
How to mitigate your criminal and brand risk
To protect your business from the 'Deepfake Trap', follow these LegalLens protocols:
The 'Gold Standard' Consent Form – Never use an AI–generated likeness of a real person (including 'micro-influencers' or even your own staff) without a specific AI Usage Consent Agreement. This must explicitly state that they allow their likeness to be digitally altered or synthesised.
Avoid 'People–Picking' tools – Do not use AI apps that allow you to 'choose' a real person’s face to place on a model's body. These tools are increasingly being flagged as illegal 'nudification' or 'harassment' software.
Ethics Training for Marketing Teams – Ensure your team understands the Protection from Harassment Act 1997. In the UK, creating a series of deepfakes of one person (even 'innocent' ones) can be legally classified as a 'course of conduct' amounting to harassment.
Implement 48–hour Takedown Policies – New rules in February 2026 require platforms to remove reported abusive deepfakes within 48 hours. If your brand is victimised by a deepfake, report it to the police and the platform immediately to trigger this fast–track protection.
5. Ethical risk and consumer fairness
In the eyes of UK regulators, an action can be legal but still 'unfair' if it exploits consumer psychology or baked–in biases. As millions of deepfakes circulate this year, 'authenticity' is no longer just a buzzword, but a compliance requirement.
Algorithmic bias and the 'Mirror' effect
AI models are not objective; they mirror the biases of their training data. If your AI–driven marketing tool consistently targets premium products to one demographic while excluding another based on 'proxies' for race, gender, or age, you may be in breach of the Equality Act 2010.
The Reality – In early 2026, the CMA published its 'Foundation Models' update, specifically targeting 'Choice Architecture'. This refers to how AI is used to manipulate 'vulnerable' consumers (such as those in financial distress) by showing them specific ads that exploit their situation.
The 'Cancel Culture' Risk: A brand found to be using 'exclusionary' AI targeting doesn't just face an investigation; it faces a 'viral' reputational crisis. Consumers in 2026 are increasingly 'AI–literate' and can spot inorganic or biased targeting from a mile away.
The August 2026 'Transparency Deadline'
While the UK takes a 'sectoral' approach, the EU AI Act (August 2026) has significant extraterritorial reach. If you are a UK brand or influencer with an audience in the EU, you will be legally required to label deepfakes and inform users whenever they are interacting with an AI system.
LegalLens Insight: Don't wait for the law to catch up. Leading UK brands are already adopting a 'Common Icon' or watermark for AI content. Transparency isn't a 'confession' of using AI; it is a trust–signal to your audience that you are an ethical player.
How to mitigate ethical and 'fairness' risks
To build a brand that survives the AI-first era, you need a Governance Framework, not just a set of tools:
Implement a 'Human–in–the–Loop' policy – Never allow an AI to make 'solely automated decisions' about pricing or customer access. A human should always be the final gatekeeper for high–stakes marketing assets.
Conduct a Quarterly 'Bias Audit' – Review your AI–generated visuals and targeting data (see the sketch after this list for one way to sanity-check targeting data). Ask: 'Does our AI–generated imagery reflect the diversity of our actual community?' If your 'CEO' renders are always men and your 'Assistant' renders are always women, your tool is biased.
Open the 'Contestability' channel – Provide a clear, accessible way for customers to complain or ask, "Why am I seeing this ad?" Under the CMA’s 2026 principles, giving consumers the right to 'contest' an AI decision is a key part of fair dealing.
Adopt the 'Transparency Default' – Use a standard disclosure like 'AI–Assisted' for edited content and 'AI–Generated' for synthetic media. In 2026, 52% of consumers report they will stop engaging with a brand if they suspect 'undisclosed' AI is being used to trick them.
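To make the 'Bias Audit' point above concrete, here is a minimal illustrative sketch (not a legal test) of comparing how premium-product impressions are spread across demographic groups versus the overall audience. The column names ('age_band', 'ad_tier') and the 20% disparity threshold are assumptions for the example, not a regulatory standard; it assumes the Python pandas library is available.

```python
# Illustrative sketch only: compare how premium-product impressions are
# spread across demographic groups versus the overall audience. Column
# names and the 20% disparity threshold are assumptions for the example,
# not a legal standard. Assumes the pandas library is installed.
import pandas as pd


def audit_targeting(impressions: pd.DataFrame, group_col: str = "age_band",
                    flag_threshold: float = 0.20) -> pd.DataFrame:
    """Flag groups whose share of premium-ad impressions deviates sharply
    from their share of the overall audience."""
    audience_share = impressions[group_col].value_counts(normalize=True)
    premium_share = (
        impressions.loc[impressions["ad_tier"] == "premium", group_col]
        .value_counts(normalize=True)
    )
    report = pd.DataFrame({"audience_share": audience_share,
                           "premium_share": premium_share}).fillna(0.0)
    report["gap"] = report["premium_share"] - report["audience_share"]
    report["flagged"] = report["gap"].abs() > flag_threshold
    return report.sort_values("gap")


# Hypothetical impression log exported from an ad platform.
log = pd.DataFrame({
    "age_band": ["18-24", "18-24", "25-34", "35-44", "45-54", "45-54"],
    "ad_tier":  ["premium", "standard", "premium", "standard", "standard", "standard"],
})
print(audit_targeting(log))
```

A gap flagged by a check like this is a prompt for human review, not proof of discrimination; the point is to create an auditable record that you looked.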
Final thoughts
AI offers transformative opportunities for your brand, but as we have seen, it does not operate in a legal vacuum. From the "Shadow Copy" copyright trap to the criminal implications of the Online Safety framework, the risks are immediate and material.
However, in 2026, compliance is no longer just a "tick-box" exercise; it is a trust signal. Brands and influencers who are transparent about their AI use and proactive about their legal safeguards will win the long-term loyalty of an increasingly AI-literate public.
At LegalLens, we believe the most successful businesses are those that adopt a "Safety–by–Design" approach. By combining robust contractual warranties with a strong ethical governance framework, you can harness the full power of generative AI while keeping your brand’s unique "sync" intact.
Don't wait for a regulatory investigation to audit your AI tools. The best time to build your guardrails was yesterday; the second best time is today.
FAQs
Is AI–generated content protected by copyright in the UK?
Under current UK law, copyright protection is built around original works with human creative input, and the position of purely AI-generated works remains uncertain. This means content generated by an AI model without significant human creative input may not attract reliable copyright protection. To secure your brand assets, we recommend registering Trade Marks for logos and slogans, as these offer more robust protection for AI–assisted materials.
Can I be sued for using AI–generated images in my marketing?
Yes. If an AI–generated image closely resembles a protected work or accidentally reproduces a trade mark (like a distorted logo or watermark), the person who publishes the image is legally liable for infringement. High–profile cases such as Getty Images vs Stability AI have set a precedent that 'intangible copies' produced by AI can be considered infringing articles under the Copyright, Designs and Patents Act 1988.
Do I have to label AI–generated ads on social media?
While there is no single 'AI labelling law' in the UK yet, the ASA (Advertising Standards Authority) requires all ads to be honest and not misleading. If the absence of an AI label would trick a consumer into believing a result is 'real' (especially in the beauty or wellness sectors), you must disclose it. Furthermore, the EU AI Act taking effect in August 2026 will mandate labels for deepfakes for any brand reaching audiences in the EU.
Is creating a deepfake of a celebrity illegal?
As of February 2026, UK law has been significantly strengthened. Under the Data (Use and Access) Act 2025, it is a criminal offence to intentionally create or commission an intimate deepfake of an adult without their consent. Even generating such content on a private device without sharing it can lead to a criminal record and up to two years in prison. For non–intimate deepfakes, you still risk significant civil claims for defamation or passing–off.
What is a DPIA and do I need one for AI marketing?
A Data Protection Impact Assessment (DPIA) is a legal requirement under the UK GDPR for any 'high–risk' data processing. Since AI marketing often involves complex profiling, behavioural targeting, or the processing of children’s data, most AI deployments will require a mandatory DPIA to identify and mitigate privacy risks before you launch.
How can brands avoid algorithmic bias?
Brands should implement an internal AI governance framework that includes Quarterly Bias Audits. This involves reviewing your AI–generated visuals and targeting data to ensure they reflect the diversity of your actual community and do not unfairly exclude specific groups based on race, gender, or age, which could violate the Equality Act 2010.
Disclaimer
The information provided in this blog post is for general informational and educational purposes only. It is not intended to constitute, and should not be relied upon as, legal, financial, or tax advice. Every influencer partnership and brand campaign is unique, and the legal requirements may vary based on your specific circumstances, jurisdiction, and the nature of the engagement. While we strive to provide accurate and up–to–date information, laws and regulations – particularly those involving the ASA, CMA, and HMRC – are subject to frequent change. We strongly recommend that you consult with a qualified legal professional or a specialist accountant before drafting, signing, or executing any commercial agreements. Use of this website or the information contained herein does not create a lawyer–client relationship between you and LegalLens.