AI and Intellectual Property: Protecting Your Trade Secrets in the Age of LLMs
UK businesses feeding confidential data into AI tools risk exposing trade secrets, customer data, and competitive advantages. A practical guide to protecting intellectual property while still using AI effectively — covering data policies, secure deployment, and IP ownership.
Here's a scenario that plays out in UK businesses every day: an engineer pastes proprietary formulation data into ChatGPT to help optimise a process. A sales director uploads a confidential pricing model to an AI analysis tool. A lawyer feeds client contracts into an AI summariser.
Each of these actions potentially exposes trade secrets, breaches client confidentiality, and creates intellectual property risks that didn't exist three years ago.
AI is incredibly useful. It's also the biggest accidental data leak channel most businesses have ever introduced. This guide covers how to get the benefits without giving away the crown jewels.
The IP Risks Most Businesses Aren't Thinking About
1. Training Data Exposure
When you use a public AI tool (the free tier of ChatGPT, Claude, Gemini, etc.), your inputs may be used to improve the model. That means:
- Your confidential data could influence responses given to other users — including competitors
- Proprietary processes, formulations, and strategies become part of a training dataset you don't control
- Client confidential information gets shared with a third party without consent
Most enterprise-tier AI services (ChatGPT Enterprise, Claude Enterprise, Azure OpenAI) don't train on your data by default. But the free tiers and consumer versions often do — and those are the versions your employees are actually using.
The uncomfortable truth: If you haven't given your team approved AI tools, they're using unapproved ones. And those unapproved tools are almost certainly the ones with the weakest data protections.
2. AI-Generated Content and IP Ownership
Who owns the output of an AI tool? Under current UK law, it's genuinely unclear:
- Copyright in AI outputs: The UK Copyright, Designs and Patents Act 1988 (Section 9(3)) provides that the author of a computer-generated work is "the person by whom the arrangements necessary for the creation of the work are undertaken." But who is that — the user, the AI provider, or the person who trained the model?
- Patent implications: The UK Supreme Court ruled in the Thaler v Comptroller-General case that an AI cannot be named as an inventor. But what about AI-assisted inventions where a human guided the process?
- Trade secret dilution: If your competitive advantage is a process and you describe that process to an AI tool, have you taken "reasonable steps" to protect the secret? The Trade Secrets (Enforcement, etc.) Regulations 2018 require demonstrable protective measures.
Practical impact: If you're relying on AI-generated content, code, or designs as part of your business, you need clarity on ownership — especially before investment, acquisition, or litigation.
3. Competitive Intelligence Leakage
This is the one that keeps strategists up at night:
- Employees ask AI tools to analyse competitor strategies, inadvertently revealing what your company considers competitively important
- Prompt patterns reveal intentions — asking "how should we price against [competitor]?" tells the AI provider exactly who you're competing with and on what basis
- Aggregated query data from multiple employees at the same company creates a detailed picture of strategic priorities
Even if the AI provider doesn't train on your data, they often retain prompts for abuse monitoring, safety, and debugging. Your competitive strategy lives in someone else's logs.
4. Client and Customer Data Risks
If your business handles client confidential information — legal, financial, medical, or commercial — the risks multiply:
- Professional duty of confidentiality may prohibit sharing client data with AI tools, regardless of the tool's privacy policy
- Data processing agreements with clients may not cover AI tool usage
- GDPR implications — personal data processed by AI tools needs a lawful basis, and "legitimate interest" may not cover it when the processing includes sending data to a US-based AI provider
A Practical IP Protection Framework
Tier 1: Classification (What Can Go Where)
Create a simple data classification system:
🟢 Green — Public / Low Sensitivity
- Published marketing content
- Public product specifications
- General industry knowledge
- Non-confidential internal communications
These can be used with any AI tool, including free tiers.
🟡 Amber — Internal / Medium Sensitivity
- Internal processes and procedures
- Non-public financial information
- Employee data (anonymised)
- Draft strategies and plans
These should only be used with enterprise-tier AI tools that have no-training guarantees and appropriate data processing agreements.
🔴 Red — Confidential / High Sensitivity
- Trade secrets and proprietary formulations
- Client confidential information
- Unpublished IP (inventions, designs, code)
- Pricing strategies and competitive analysis
- M&A information
- Personal data (identifiable)
These should only be processed by self-hosted AI models or tools with the highest level of data protection. Many should never be shared with external AI tools.
Tier 2: Approved Tools and Deployment Models
Match your tools to your data sensitivity:
For Green data:
- Any reputable AI tool
- Consumer and free tiers acceptable
- Minimal governance required
For Amber data:
- Enterprise-tier AI services only (ChatGPT Enterprise, Claude Enterprise, Azure OpenAI, Google Vertex AI)
- Data processing agreements in place
- No-training guarantees confirmed in writing
- Data residency requirements met (UK/EU where needed)
For Red data:
- Self-hosted models (Ollama, vLLM, Azure Private Endpoints)
- On-premise deployment where data never leaves your infrastructure
- Air-gapped systems for the most sensitive applications
- Or simply don't use AI for this data — sometimes the answer is no
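The tier-to-tool mapping above can be expressed as a simple policy lookup that an internal gateway or onboarding script might consult. This is a hypothetical sketch — the tool identifiers and the `is_permitted` helper are illustrative names, not a real library — and your own approved list would be populated from your vetting process.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "green"   # public / low sensitivity
    AMBER = "amber"   # internal / medium sensitivity
    RED = "red"       # confidential / high sensitivity

# Hypothetical approved-tools register; populate with your own vetted services.
APPROVED_TOOLS = {
    Tier.AMBER: {"chatgpt-enterprise", "claude-enterprise", "azure-openai", "vertex-ai"},
    Tier.RED: {"self-hosted-ollama", "on-prem-vllm"},
}

def is_permitted(tool: str, tier: Tier) -> bool:
    """Return True if `tool` may process data at this sensitivity tier."""
    if tier is Tier.GREEN:
        return True  # consumer and free tiers acceptable for public data
    return tool in APPROVED_TOOLS[tier]
```

Keeping the register in one place means the approved list referenced in your policy document and the one your technical controls enforce can't drift apart.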
Tier 3: Policies and Training
Create an AI Acceptable Use Policy that covers:
- What data can be shared with AI tools — reference the classification system
- Which tools are approved — maintain an approved list with regular reviews
- How to handle AI outputs — review requirements before using AI-generated content in client deliverables
- Incident reporting — what to do if someone accidentally shares sensitive data with an unapproved tool
- IP ownership — who owns AI-assisted work product created during employment
Training should be practical, not theoretical:
- Show employees what happens to data in different AI tiers
- Use real examples relevant to their roles
- Make the approved tools easy to access — people use shadow AI because the official route is too hard
- Update training as tools and policies change
Tier 4: Technical Controls
Policies alone aren't enough. Technical controls enforce them:
- DLP (Data Loss Prevention) tools that monitor and block sensitive data being pasted into AI chatbots
- API gateway controls that route AI requests through monitored, approved endpoints
- Browser extensions that warn users when they're about to paste classified data into a public AI tool
- Network-level blocks on unapproved AI services (with explanations of why, and easy access to approved alternatives)
- Audit logging of AI tool usage — not to punish, but to identify risks and improve training
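To make the DLP idea concrete, here is a minimal pre-check of the kind a browser extension or API gateway might run before text leaves your network. It is a sketch only — real DLP products use classifiers and document fingerprints, and the regex patterns below are illustrative assumptions, not a complete detection set.

```python
import re

# Illustrative patterns only; a real deployment would use your own
# keyword lists, classifiers, and document fingerprints.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),           # UK National Insurance number shape
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like digit runs
    re.compile(r"(?i)\b(confidential|trade secret|do not distribute)\b"),
]

def flag_sensitive(text: str) -> list[str]:
    """Return matched fragments that should block a paste into a public AI tool."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Even a crude check like this catches the most common accidental pastes, and the match list gives the user a specific reason why the paste was blocked rather than a silent failure.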
Protecting AI-Generated IP
If you're creating valuable outputs with AI assistance, protect them:
Code
- Review and modify AI-generated code before incorporating it — this strengthens your claim to authorship
- Document the human contribution — what did the developer specify, review, modify, and test?
- Check for licence contamination — AI code suggestions may reflect open-source code with copyleft licences
- Keep records of the iterative process — prompts, refinements, and human decisions that shaped the output
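One lightweight way to keep those records is an append-only log of each AI-assisted change. The structure below is a hypothetical sketch under the assumption that a JSONL file per repository is enough for your evidence needs — the field names are illustrative, not a standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProvenanceRecord:
    """One AI-assisted change, recorded as evidence of the human contribution."""
    file_path: str
    prompt_summary: str   # what the developer asked for
    human_changes: str    # what was reviewed, modified, and tested by a person
    tool: str             # which AI tool produced the suggestion
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(log_path: str, record: ProvenanceRecord) -> None:
    """Append the record as one JSON line; JSONL keeps the log append-only."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Because each entry is timestamped and never rewritten, the log reads as a contemporaneous account of the iterative process — exactly the kind of record that supports an authorship claim later.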
Content
- Edit and add original insight — AI-drafted content that's substantially modified by a human has a stronger copyright claim
- Don't publish raw AI output as if it's original thought leadership — aside from IP concerns, it's usually obvious
- Attribute appropriately where your industry or professional standards require it
- Keep dated records of creation — UK copyright arises automatically and has no registration system, so contemporaneous records of authorship are your evidence for enforcement
Designs and Inventions
- Document the human inventive step — if AI assisted in reaching an invention, record exactly how human creativity directed the process
- File patent applications early — the inventive contribution must be identified as coming from a natural person
- Keep the AI tool's contribution bounded — use AI for research, analysis, and optimisation, but ensure the inventive concept originates from human insight
What UK Law Currently Says
The UK's position on AI and IP is evolving, but as of early 2026:
Copyright:
- Computer-generated works get copyright protection under Section 9(3) CDPA 1988
- The author is "the person by whom the arrangements necessary for the creation of the work are undertaken"
- This likely means the person who prompted and directed the AI, but there's no definitive case law yet
- The UK has not followed the US Copyright Office's position that AI-generated content is uncopyrightable — the UK approach is more nuanced
Patents:
- AI cannot be named as an inventor (Thaler v Comptroller-General [2023])
- AI-assisted inventions can be patented if a human inventor is identified
- The inventive step must be attributable to a human
Trade Secrets:
- Trade Secrets Regulations 2018 require "reasonable steps" to maintain secrecy
- Sharing secrets with AI tools without adequate protections could undermine trade secret status
- This is a genuine risk — if you can't demonstrate you took reasonable steps to protect the information, you may lose legal protection for it
Data Protection:
- GDPR/UK GDPR applies to personal data processed by AI tools
- AI providers are typically data processors — you need appropriate agreements
- International data transfers (UK to US) need adequate safeguards
- The ICO is actively investigating AI data processing practices
Practical Steps for UK SMEs
You don't need a legal department to get this right. Here's a pragmatic approach:
This Week
- Audit current AI tool usage — ask your team what they're actually using (no judgement)
- Identify the biggest risks — where is sensitive data most likely being shared with AI?
- Provide approved alternatives — give people enterprise-tier tools so they stop using consumer ones
This Month
- Create a simple AI policy — one page, clear classifications, approved tools list
- Brief the team — 30-minute session covering what's OK, what's not, and why
- Review client contracts — do your agreements with clients permit AI tool usage for their data?
This Quarter
- Implement technical controls — DLP tools, API gateways, approved browser extensions
- Review IP ownership — update employment contracts to clarify AI-assisted work product ownership
- Establish an audit cycle — quarterly review of AI tool usage, policies, and emerging risks
Ongoing
- Stay current — UK AI regulation is evolving; the AI Safety Institute's work and potential legislation will change the landscape
- Review and update — policies should be living documents, updated as tools and law change
- Train continuously — new tools and new employees need ongoing education
The Balance: Protection Without Paralysis
The worst response to AI IP risks is banning AI entirely. Your competitors won't, and you'll fall behind while they capture the productivity gains.
The right response is informed, proportionate risk management:
- Use AI aggressively for low-sensitivity work — marketing content, general research, code scaffolding, process documentation
- Use AI carefully for medium-sensitivity work — with enterprise-tier tools and appropriate agreements
- Use AI sparingly or not at all for high-sensitivity work — trade secrets, client confidential information, unpublished IP
The businesses that get this right won't just protect their IP — they'll build a competitive advantage from responsible AI use while competitors are still figuring out their policies.
Caversham Digital helps UK businesses develop AI governance frameworks that protect intellectual property while enabling innovation. Get in touch to discuss your AI IP strategy.
