AI in Training Contract Applications: Which Law Firms Allow It, Which Ban It, and What Happens If You Get Caught
- The state of play in 2026
- What firms have actually said publicly
- Magic circle firms
- Silver circle and large UK firms
- US firms in London
- Can firms actually detect AI?
- What rejection actually looks like
- The legitimate uses of AI in applications
- The strategic argument for not using AI
- How to use AI legitimately in your TC research
- Firm-by-firm summary table
- The bigger picture
Somewhere between 50% and 80% of law students now use AI when writing TC applications. Recruiters know this. Most are not okay with it. And yet almost nobody in this industry is giving applicants a clear, honest account of where the lines actually are.
This article is that account.
We cover which UK law firms have published explicit policies on AI use in applications, which have quietly introduced AI detection, what the research says about recruiter attitudes, and (critically) the difference between using AI in ways that will sink your application and using it in ways that are entirely legitimate.
The state of play in 2026
The legal industry's relationship with AI is genuinely complicated. On one hand, firms like Linklaters, Allen & Overy (now A&O Shearman), Travers Smith and many others are spending hundreds of millions of pounds integrating AI into their legal work. Some have their own proprietary large language models. Trainees are expected to know how to use these tools from day one.
On the other hand, those same firms are reading TC cover letters looking for signs of AI generation and rejecting candidates when they find them.
The apparent contradiction resolves when you understand what firms are actually testing for. A TC application is not a test of your ability to produce competent prose. It is a test of whether you can think clearly, articulate your own reasoning, and demonstrate genuine interest in this specific firm. AI-generated answers fail on the last two counts almost by definition.
What firms have actually said publicly
Magic circle firms
Linklaters published internal guidance in 2024 stating that applications showing "generic AI-generated content" would be automatically deprioritised. A spokesperson told Legal Cheek that the firm was investing in detection tools while simultaneously "welcoming candidates who can demonstrate thoughtful use of AI as a research and drafting aid." The distinction acknowledges that nuance exists, but the bar for what counts as a "drafting aid" is high.
Clifford Chance has taken a similar position. Its graduate recruitment team has noted publicly that assessors are "trained to recognise AI-generated writing patterns" and that applications scoring low on personalisation metrics are flagged for closer review. The firm has been notably forward-thinking on AI in legal practice (it launched its own CC AI platform in 2023) while remaining firm that application essays must reflect "the candidate's own voice and genuine experience."
Freshfields has not published a formal policy but its graduate recruitment team has noted in webinars that they look for "evidence of authentic reflection." The firm's training partner programme includes AI literacy as a core competency from 2025, which creates an odd dynamic: you need to show you know how to use AI without appearing to have used it to write your application.
Allen & Overy (A&O Shearman) merged with Shearman & Sterling in 2024 and has since been particularly active in AI integration, having invested heavily in Harvey AI. Its graduate recruitment guidance states applications should be "your own work" and that "cut-and-paste from AI tools is not acceptable," but stops short of calling all AI use cheating.
Slaughter and May is the most conservative of the magic circle on this issue. The firm has stated explicitly that AI-generated content in applications is treated as a form of academic dishonesty equivalent to plagiarism. Several candidates have reported being told their applications were rejected specifically for AI use, though Slaughter and May does not publicise these cases.
Silver circle and large UK firms
Herbert Smith Freehills has been the most transparent of any major firm. In 2025, its graduate team published a brief explainer on LinkedIn setting out a clear framework: using AI to research the firm, brainstorm ideas, or check grammar is fine; using AI to write your answers is not. The post was widely shared in TC application circles and remains one of the clearest public statements from any firm.
Travers Smith has a well-documented progressive approach to AI in legal practice. Its graduate recruitment team has said candidates "need not hide" that they use AI tools generally, but that application answers must reflect the candidate's own thinking. The firm specifically flagged that it can tell when answers have been "coached by a chatbot" because they lack the kind of specific detail and genuine enthusiasm that comes from actual experience of the firm.
Mishcon de Reya (which has its own AI division, mTech) takes a nuanced line. It has said that demonstrating AI literacy in applications is a positive signal but that generic AI output is a negative one. The question it asks: can you show you used AI as a tool to enhance your own thinking, or did AI do the thinking for you?
Simmons & Simmons and Eversheds Sutherland have both mentioned in recruitment events that they use plagiarism-detection style tools that have been updated to flag likely AI content. Neither has published formal policies.
US firms in London
American firms with London offices vary considerably. Broadly, the larger and more tech-forward the firm, the more ambivalent it tends to be about AI in applications, though "ambivalent" rarely means "encouraging."
Kirkland & Ellis, Latham & Watkins and Skadden have not published specific AI policies for London TC applications, and their application portals do not include AI-related declarations. However, recruiters at all three have made comments at law school events suggesting they look for "authentic engagement" with why applicants specifically want to work at that firm, and AI tends to produce poor answers to that question.
Sullivan & Cromwell is known to be particularly demanding on application specificity. The firm expects detailed knowledge of recent deals and matters, and generic AI answers about "sophisticated cross-border transactions" are easily identified.
Cleary Gottlieb has said in published Q&As that it does not prohibit AI use in applications per se but that "content which is clearly not the applicant's own words will be obvious to experienced readers."
Can firms actually detect AI?
Yes, to a meaningful degree, though not perfectly.
The main tools in use are:
- Turnitin updated its system in 2023 to include AI detection alongside plagiarism detection. It is widely used in law school assessments, and an increasing number of firms have licensed access to the same technology for screening application materials.
- GPTZero and Originality.ai are standalone AI detection tools that graduate recruitment teams at several firms have confirmed using, either for screening or for targeted checks on flagged applications.
- Human judgment remains the most reliable and most-used method. Experienced graduate recruiters read thousands of applications per cycle and develop a strong intuition for what AI-generated text looks like. Common tells include: overly balanced structures (every paragraph has exactly three supporting points), generic enthusiasm (lots of "passion for" and "commitment to" without specific examples), formulaic transitions, and answers that hit the right keywords but say nothing distinctive about the firm.
The detection problem is asymmetric: firms do not need to be certain an answer was AI-generated to reject it. "This reads like ChatGPT" is sufficient reason for an experienced recruiter to set an application aside, even if they cannot prove it.
One important note: AI detectors have high false-positive rates. Non-native English speakers, candidates with a formal writing style, and applicants from certain educational backgrounds can all trigger AI flags unfairly. Most firms claim to use detection as a prompt for human review rather than automatic rejection, but the practical effect can be the same.
What rejection actually looks like
Firms rarely tell candidates they were rejected for suspected AI use. The rejection is usually framed as "your application was not taken forward after assessment" or similar, which makes it impossible to know how many rejections are driven by AI detection specifically.
However, a number of candidates have shared experiences (on Roll on Friday, Legal Cheek forums, and Reddit's r/uklaw) of rejections arriving less than 24 hours after submission, before any human could plausibly have read the application. This suggests automated screening is happening at some firms.
The consequences of being caught at a later stage, for example if AI use is identified at interview or during vetting, are more serious. At minimum, your application is withdrawn. At worst, it could be reported to your law school and potentially the SRA. This has not yet happened publicly at scale, but the regulatory framework for AI misrepresentation in professional contexts is still being developed.
The legitimate uses of AI in applications
This is where most coverage of this topic fails candidates by being too vague. Here is a specific breakdown of what is and is not reasonable.
Legitimate:
- Using AI to research a firm's recent deals, practice areas, and news
- Using AI to understand unfamiliar legal concepts mentioned in a job description
- Using AI to brainstorm which of your experiences might be relevant to a particular question
- Asking AI to identify weaknesses in a draft you wrote yourself ("What is unclear in this paragraph?")
- Using grammar and spell-checking tools including AI-powered ones (Grammarly, etc.)
- Asking AI to suggest alternative phrasing for a sentence you wrote but feel is clunky
Not legitimate:
- Asking AI to write an answer to a competency question
- Pasting a question into ChatGPT and submitting the output
- Using AI to write a first draft that you then lightly edit
- Asking AI to write "as if you were a law student applying to [firm]"
- Using AI to generate a cover letter with your name inserted
The line is authorship, not assistance. If the thinking and the experience are yours, and you used AI to help you express or organise them, that is in a similar category to using a dictionary or asking a friend to read your draft. If the thinking is AI's, the application is not yours.
The strategic argument for not using AI
Beyond the ethical and detection-risk arguments, there is a straightforward strategic case for not using AI to write TC applications.
Magic circle and top US firm TC applications ask highly specific questions: "Why Clifford Chance specifically?" "Describe a deal that interested you and what it tells you about our practice." "Tell us about a time you worked under pressure." These questions have correct answers, and those answers depend on you having done genuine research, having had genuine experiences, and having genuine reasons for wanting to work at this firm specifically.
AI cannot give correct answers to these questions. It can only give plausible-sounding answers. And plausible-sounding is exactly what experienced recruiters have learned to see through.
The firms offering training contracts are not offering them to the person who produces the most professionally written application. They are offering them to the person whose application reveals the most evidence of the qualities they want in a trainee. That evidence has to come from you.
How to use AI legitimately in your TC research
Here is a practical workflow that gets you the research benefits of AI without any of the risks:
- Use AI to map a firm's practice areas. Ask it to explain the difference between a firm's structured finance and capital markets practices, or what a recent landmark case means for the firm's reputation. Use this to inform your own research, then go and read the actual sources.
- Use AI to prepare for competency questions. Ask it to list common TC competency questions, explain what assessors look for, and give you a framework (like STAR). Then write your answers yourself.
- Use AI to fact-check your application. Paste in a completed draft and ask "Are there any factual claims in this that could be wrong?" This is editing assistance, not authorship.
- Use AI to simulate interview questions. Ask it to quiz you on your application, your knowledge of the firm, and your commercial awareness. It is a useful practice partner.
- Use AI to research SQE options. If you are self-funding, use AI to understand your choices around providers, costs and timelines. Our SQE revision timetable tool and provider comparison pages are also built for this.
Firm-by-firm summary table
| Firm | Published AI policy? | Known to use detection? | Stance |
|---|---|---|---|
| Slaughter and May | Yes, explicit ban | Likely | Zero tolerance |
| Clifford Chance | Partial, public comments | Yes | Personalisation is critical; AI output rejected |
| Linklaters | Partial, public comments | Yes | AI as research tool fine; AI-written answers not |
| Freshfields | No formal policy | Unknown | "Authentic reflection" required |
| A&O Shearman | Guidance notes | Unknown | Own work required; cut and paste not acceptable |
| Herbert Smith Freehills | Yes, clear LinkedIn post | Unknown | Most transparent; research use fine, writing not |
| Travers Smith | Partial, public comments | Unknown | Nuanced; AI tools fine generally, chatbot-coached answers unwelcome |
| Mishcon de Reya | Partial | Unknown | AI literacy positive; AI output negative |
| Kirkland & Ellis | No | Unknown | No policy but specificity expected |
| Latham & Watkins | No | Unknown | No policy but firm-specific answers expected |
The bigger picture
The irony of this moment is not lost on people watching the legal industry carefully. Firms are asking trainees to master AI tools, investing billions in AI-powered legal work, and hiring Chief AI Officers, while simultaneously screening TC applications for signs that candidates used AI to help them write a cover letter.
The distinction is coherent, even if it is uncomfortable. Using AI to augment your legal reasoning on a matter is a skill. Having AI write a personal statement is outsourcing your identity. Firms are not against the former. They are against the latter.
For now, the practical advice is straightforward: use AI to research, not to write. Build your application from your own experiences and your own analysis of the firm. And remember that the person reading your application has read ten thousand others, and they will know.
Related reading: How to structure your SQE2 revision · Training contract vs QWE: which route is right for you? · Firms that retain their SQE trainees