
Privilege, Generative AI & the New Duty of Care: What In-House Legal Teams Must Solve Now

How In-House Legal Counsel Can Balance AI Innovation with Privilege, Confidentiality, and Compliance

“What happens to legal professional privilege if you ask ChatGPT to help you draft the memo? Does using AI dissolve the protection clients expect from legal counsel?”

That’s no longer a hypothetical for in-house legal teams. As generative AI tools (ChatGPT, Harvey, Gemini, the list goes on) seep into legal workflows, in-house counsel must reconcile speed, efficiency and innovation with confidentiality, privilege, and risk. How legal teams manage AI in 2025 will define whether they remain trusted guardians or expose their organisations to hidden liabilities.

In this article, I explain why generative AI is disrupting privilege, interpret the available data, and set out my views (and the necessary guardrails) for in-house legal teams in this new frontier.

 

In-house legal counsel are adapting fast to the AI shift

  • In the UK, 61% of lawyers now use generative AI in their work, up from 46% earlier in 2025. 
  • In corporate legal departments, AI adoption “climbed from 11% to 41% in a year” in some jurisdictions. 
  • Deloitte predicts that in 2025, two-thirds of organisations will increase generative AI investments, with legal teams in the crosshairs.
  • In the law firm world, 78% of the top 40 UK firms now actively market their use of AI to clients, nearly a 20% year-on-year jump.

In short: legal teams cannot opt out of AI. The question is how to integrate it safely, ethically, and without losing privilege or client trust.

 

Why AI challenges privilege and confidentiality

The crux is this: using generative AI changes the risk landscape around confidentiality and legal professional privilege (LPP).

a) AI tools are not lawyers

Under English law, LPP protects communications between a client and legal adviser made for the purpose of seeking or giving legal advice. But inputs into generative AI tools (especially public models) often do not qualify, as the “recipient” is not a lawyer. 

That means if you paste a confidential memo into ChatGPT for drafting suggestions, that act might jeopardise privilege.

b) The “hallucination” risk

Even when you use a dedicated legal AI tool, the output can be unpredictable. Research shows that legal AI tools still hallucinate, producing false citations, invented authorities, or erroneous reasoning. One study found that leading legal AI research tools hallucinated between 17% and 33% of the time.

That has real consequences: UK judges have already warned lawyers for citing non-existent cases generated by AI, conduct that can lead to court sanctions or worse.

c) Risk of waiver through inadvertent disclosure

If the input or output is shared with external parties or stored in systems you don’t control, privilege may be waived. AI platforms often log usage and aggregate data, making confidentiality harder to guarantee. 

d) Volume, speed, and oversight

AI accelerates the creation of documents. That means more derivative work being generated quickly, with potential for errors, overlap, or misclassifications of privilege. It demands greater discipline and oversight from in-house legal teams. 

 

My view: in-house teams must adopt a “duty of care” approach

For General Counsel and in-house legal counsel, the question is not whether to adopt AI, but how to do so responsibly. Given these challenges, I believe in-house legal teams must treat AI not as a convenience but as a risk domain akin to cybersecurity or data privacy. That means a duty-of-care framework: protect privilege, preserve confidentiality, and manage risk, all while enabling efficiency.

Here are the principles I think every in-house team needs to own:

i) Define a formal policy: which AI tools are approved, by whom, and for what use cases

You can’t let ad hoc usage run wild. The legal team must vet, classify, and approve AI tools (on-prem, closed models, proprietary systems) and set clear rules on where public or external AI tools must never be used.
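
As an illustration only (the tool names, categories, and rules below are placeholder assumptions, not recommendations), a simple approved-tools register might look something like this:

```python
# Illustrative approved-tools register: placeholder entries, not recommendations.
APPROVED_AI_TOOLS = {
    "internal-closed-model": {
        "deployment": "on-prem",
        "allowed_uses": {"contract first drafts", "clause comparison", "summarisation"},
        "confidential_inputs_allowed": True,
    },
    "public-chatbot": {
        "deployment": "public cloud",
        "allowed_uses": {"generic legal training material"},
        "confidential_inputs_allowed": False,  # never paste client or matter information
    },
}


def is_use_approved(tool: str, use_case: str, involves_confidential_input: bool) -> bool:
    """Check a proposed use against the register; unknown tools are rejected by default."""
    entry = APPROVED_AI_TOOLS.get(tool)
    if entry is None:
        return False
    if involves_confidential_input and not entry["confidential_inputs_allowed"]:
        return False
    return use_case in entry["allowed_uses"]


# Example: drafting help on a confidential contract via a public chatbot is refused.
print(is_use_approved("public-chatbot", "contract first drafts", True))   # False
print(is_use_approved("internal-closed-model", "summarisation", True))    # True
```

Whether this lives in code, a shared spreadsheet, or a policy document matters less than the principle: every tool and use case is classified before anyone types a prompt.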

ii) Human oversight is non-negotiable

Any AI output that will travel beyond an internal draft must be reviewed by a lawyer. Treat AI as a drafting assistant, not an author. That validation layer is your privilege guard.

iii) Maintain audit trails, versioning and logs

Your legal team must be able to explain what was input, when, by whom, and how the output was used. That traceability is essential if privilege is challenged later.
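
To make that concrete, here is a minimal sketch of what one AI-usage log entry could capture; the field names, and the choice to hash the prompt rather than store it verbatim, are my own assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib


@dataclass
class AIUsageRecord:
    """One AI-usage log entry kept for later audit or privilege review."""
    user: str                # who ran the prompt
    tool: str                # which approved AI system was used
    matter_ref: str          # internal matter or contract reference
    purpose: str             # e.g. "first-draft negotiation points"
    prompt_hash: str         # hash of the input, so the log never holds the content itself
    output_reviewed_by: str  # the lawyer who validated the output
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def make_record(user: str, tool: str, matter_ref: str, purpose: str,
                prompt_text: str, reviewer: str) -> AIUsageRecord:
    """Create a record without storing the confidential prompt verbatim."""
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return AIUsageRecord(user, tool, matter_ref, purpose, digest, reviewer)
```

Hashing the input keeps confidential content out of the log itself while still letting you show what was submitted, when, by whom, and who reviewed the output.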

iv) Use closed/proprietary systems where possible

Open/public models carry more risk. Closed or privately managed AI systems mitigate some confidentiality exposure, though not all. Even then, a privilege analysis is still required.

v) Training and awareness across the business

Conduct regular training on AI risks, privilege, confidentiality and how to use AI tools properly. Non-legal teams (product, engineering, compliance) must be aware of boundaries.

vi) Risk classification: red/amber/green work

Not all legal work should go through AI. Reserve “red” matters (sensitive litigation, crisis, regulatory reporting) for traditional drafting. Use AI for “green” or “amber” tiers under supervision.
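
As a purely illustrative sketch (the tiers follow the principle above, but the matter types and house rules are assumptions of mine), a red/amber/green gate might be expressed like this:

```python
# Illustrative red/amber/green gate: the matter types and house rules are assumptions.
RISK_TIERS = {
    "red":   {"sensitive litigation", "crisis response", "regulatory reporting"},
    "amber": {"legal research summary", "internal memo"},
    "green": {"contract first draft", "meeting notes"},
}


def ai_rule_for(matter_type: str) -> str:
    """Return the house rule for a matter type; anything unclassified escalates to legal."""
    for tier, matters in RISK_TIERS.items():
        if matter_type in matters:
            if tier == "red":
                return "No AI: traditional drafting only"
            if tier == "amber":
                return "AI permitted, every output reviewed by a lawyer"
            return "AI permitted under standard supervision"
    return "Unclassified: escalate to legal before using AI"


print(ai_rule_for("regulatory reporting"))  # No AI: traditional drafting only
print(ai_rule_for("contract first draft"))  # AI permitted under standard supervision
```

The point is the default: anything not explicitly classified escalates to legal rather than quietly passing through.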

vii) Periodic review and audit

Once the policy is live, review how it’s working. Are teams complying? Are logs being kept? Are outputs safe? Adjust the policy as the technology and the law evolve.

 

Applying this in practice: illustrations & pitfalls

Let me walk you through how this might play out in real life, and where things go wrong.

Use case: contract drafting with AI

You input a draft agreement and ask AI to generate negotiation points. Legal reviews the output, adjusts, and finalises. That is relatively safe: the AI was used internally, with human oversight and no outside disclosure.

Pitfall scenario: internal memo to external partner

You use generative AI to prepare a memo for a third party. That memo is then shared externally. If raw AI output (with hidden metadata or draft text) leaks, privilege could be lost.

Use case: legal research & summarisation

A team wants to summarise recent cases. Using AI can cut the time dramatically, but when the output is interwoven with factual context from the matter, a privilege analysis is required.

Pitfall: AI hallucination in court document

A junior lawyer drafts parts of a submission with AI, fails to fully validate a cited case, and it ends up in the filed court document. That risks sanctions, reputational harm, and potential ethical breaches. Courts are already warning counsel about citing fabricated cases.

 

Data points that underscore urgency

  • The State of the UK Legal Market 2025 reports that 28% of UK legal departments expect their external legal spend to decline this year, so in-house legal teams are under pressure to internalise more legal work and deliver productivity.
  • The AI Legal Report 2025 shows that 76% of corporate legal teams are significantly increasing AI budgets (by 26–33%) to capture productivity gains, yet only 1 in 5 teams describe themselves as “AI mature.” 
  • A recent survey suggests that law firm offerings and in-house needs are misaligned: many GCs report using AI tools more than their outside counsel do, intensifying expectations on in-house functions.
  • Research on legal AI tools demonstrates that large language models outperform human reviewers on invoice review by a substantial margin (92% accuracy vs 72%). That points to real productivity upside for legal operations and cost control.
  • The “Future of Professionals 2025” survey by Thomson Reuters found that 87% of UK legal professionals believe AI will significantly impact the profession within five years and half expect transformational change in their own departments this year. 

These numbers make one thing clear: AI is not optional. But privilege, confidentiality, and legal risk cannot be afterthoughts.

 

A mindset shift: from tool to trust

I believe in-house legal must move from treating AI as just a productivity tool to regarding it as a trust boundary. Clients, boards, regulators expect legal to act as a gatekeeper of confidentiality and care. That means adopting a mindset where:

  • AI decisions are legal decisions: every use must be assessed through a legal lens
  • The legal function owns the AI overlay: not leaving use cases to business units without legal oversight
  • You build credibility by protecting privilege first, then scaling AI safely: that builds trust internally
  • You anticipate regulation and liability: the smarter teams will build robustness before exposure becomes a crisis

In other words: legal teams must be architects of responsible AI, not passive adopters.

 

Where we go from here: regulation, standards, and liability

Looking ahead, three trends keep me up at night:

1. Regulatory frameworks will catch up

The EU AI Act, growing pressure for transparency, and regulatory scrutiny will force accountability in how AI is used in legal contexts. In-house legal teams need to be ready for external audit and oversight.

2. Privilege and liability litigation will evolve

Clients or adversaries will inevitably test privilege in the AI context. In-house counsel must be ready to defend usage decisions, logs, versioning and oversight as evidence of care.

3. Integration with legal tech / automation platforms

AI will increasingly plug into CLMs, document management, and knowledge systems. That raises questions of data governance, access rights, and integration risk. In-house legal must lead this orchestration, not follow.

Teams that can balance innovation and control will have a huge competitive advantage. Others risk being dragged into disputes, ethical reviews, or breach exposure.

 

What every in-house team should do now

If I were running an in-house legal team today, here’s what I’d prioritise in the next 90 days:

  1. Run an AI usage audit: find out who is using AI, for what, and whether logs exist.
  2. Draft a privilege-safe policy: classify tools, approve models, define oversight pathways.
  3. Pilot and test safely: choose lower-risk use cases (contract drafting, summarisation) and validate with lawyers.
  4. Train the business and the legal team: privilege awareness, confidentiality, and how to use AI tools responsibly.
  5. Implement logging, versioning, and traceability, so usage can be audited or defended.
  6. Define an escalation protocol: “red” or high-risk matters require legal sign-off before AI is used.
  7. Monitor evolution: keep an eye on case law (e.g. privilege rulings), AI regulation, and tech advances.

Do this, and you’re not only protecting your clients, you’re building a differentiated, future-facing legal function.

 

The Promise, the Risk, and the Opportunity

Generative AI presents a paradox for in-house legal teams: enormous productivity gains, but correspondingly high risk to privilege and confidentiality. The teams that succeed will be those that treat AI not as a toy, but as a professional risk domain.

If your legal function manages to thread that needle, enabling AI for drafting, research, and contract automation while protecting clients, maintaining privilege, and applying human judgment, then you will have earned a new seat at the table: the seat of strategy, control, and trust.

Related articles:
[How to Hire Your First In-House Lawyer (and get it right)]
[What the General Counsel of 2026 Will Look Like]
[In-House vs. Law Firm Dichotomy: Culture, Mental Health and The Path To Improvement]