
TCLA Vacation Scheme Applications Discussion Thread 2025-26

Dannie

Legendary Member
Premium Member
Mar 20, 2024
158
243
Guys...wait. Covington Summer AC WTF WTF WTF. Summer application, applied literally just around the deadline
CONGRATSSS YAYY When did you apply?
 

trainee4u

Legendary Member
Sep 7, 2023
512
1,050
I just completed the WBD Neurosight test. My first test of this type.

That's an interesting one. You place your mouse in a set position, a question appears, then after 5 seconds a few possible choices appear on screen and you move the mouse to the one that reflects your immediate thoughts. I heard that they track mouse movements to gauge authentic responses.

Makes a nice change from WG, of course!
I asked Gemini for an analysis:


While the general scientific concepts they use (such as reaction time and mouse tracking) are real methods used in cognitive psychology research, the specific application of these methods to predict job performance in this format has not been independently verified by the scientific community. The claims of "3-6x higher validity" appear to be internal marketing statistics rather than published scientific facts.


1. What is the "Science" Behind It?​

Neurosight claims to use "decision dynamics" or "process tracing."

  • The Theory: Traditional tests only score what you choose (A vs. B). Neurosight claims to score how you choose. In academic research, tracking mouse movements (velocity, hesitation, deviation) can indeed reveal "cognitive conflict" (how torn you are between two choices) or "certainty" (how confident you are).

  • The 5-Second Mechanism: The specific "wait 5 seconds" rule you encountered is likely a Forced Delay mechanism.
    • Purpose: This is designed to separate processing time (reading and thinking) from reaction time (moving and clicking). By forcing you to wait, they ensure you have read the word/question before you can physically respond.
    • Cheating Prevention: This also helps prevent "impulsive" clicking or using AI tools to instantly scan and select the "correct" answer, as the user is forced to pause.

2. Is There Evidence It Works for Recruitment?​

While the underlying theory (mouse tracking) is valid for lab experiments, applying it to hiring is a different matter.

  • No Independent Studies: There are no studies in major psychology journals (like the Journal of Applied Psychology) validating that Neurosight's specific method accurately predicts who will be a good employee.
  • The "Black Box" Problem: Because the scoring algorithm is proprietary (secret), we don't know if a "hesitation" is scored as "careful and thoughtful" (good) or "uncertain and indecisive" (bad). This ambiguity makes it difficult to say the test is scientifically "valid" in a universal sense.




The fact that they refuse to publish their "scientific evidence" publicly is a major red flag.

In addition, I'd argue that the forced delay is actively bad. During the forced delay, I'm thinking about the right answer. For example, one of the questions is "for me, explaining topics in an easy-to-understand way", and during those 5 seconds you already know the answer will be "comes naturally" or similar, so you simply move the mouse to that answer the moment the options appear. However, another question is "when a customer raises an issue it's critical to", and you're left thinking 'is it going to be something obvious like "address it"?'. In fact, when the answers emerge they are BOTH important, so there is likely to be some hesitation.
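To make the hesitation point concrete, here's a rough sketch (my own illustration; Neurosight's actual scoring is proprietary and not public) of the kind of trajectory metrics the process-tracing literature describes, i.e. how a wandering pointer gets reduced to numbers:

```python
import math

def trajectory_metrics(samples):
    """samples: list of (t, x, y) mouse positions, t in seconds."""
    (t0, x0, y0), (tn, xn, yn) = samples[0], samples[-1]
    straight = math.hypot(xn - x0, yn - y0)   # length of the direct path
    travelled = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:])
    )
    duration = tn - t0
    return {
        # average pointer speed over the whole response
        "mean_velocity": travelled / duration if duration else 0.0,
        # 1.0 = dead straight; higher = the pointer wandered ("conflict")
        "path_ratio": travelled / straight if straight else float("inf"),
    }

# A hesitant response: drift toward one option, then commit to the other.
hesitant = [(0.0, 0, 0), (0.3, 120, 40), (0.6, 60, 80), (1.2, -200, 150)]
print(trajectory_metrics(hesitant))
```

A path ratio well above 1.0 is exactly the natural hesitation I described, and without the algorithm being public it's anyone's guess whether it gets scored as thoughtful or indecisive.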

I read bro's claims here:


and this is frankly absurd hyperbole:

"a classic situational judgment test right you might read the scenario think about it for a minute or so then select what the best option is and the worst option is so in 3 minutes you might only have one data point"
"in three minutes in our assessment we collect 106,000 data points"

In the first case you have one meaningful data point, and if you collect, say, 10 of those, you can give a statistically confident range for that candidate's ability to answer those questions. In the latter case, the 106,000 "data points" are raw mouse movements and timings with no semantic value of their own. You're trusting that the algorithm converting them into the handful of scores (say, 10) actually reported to the client is valid, and that the "commercially sensitive" studies really show what the salesman says. The 106,000 claim itself is just stupid.
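For scale, here's the back-of-envelope arithmetic on the 106,000 figure, assuming (my assumption; the pitch doesn't say) that each "data point" is one value of an (x, y, timestamp) sample:

```python
# Back-of-envelope on "106,000 data points in three minutes".
claimed_points = 106_000
duration_s = 3 * 60
values_per_sample = 3                         # x, y, timestamp (assumed)

samples = claimed_points / values_per_sample  # ~35,333 samples
rate_hz = samples / duration_s                # ~196 Hz
print(f"{samples:.0f} samples ≈ {rate_hz:.0f} Hz of raw mouse telemetry")
```

Roughly 196 Hz of pointer telemetry is just what any mouse-tracking script records; the count says nothing about how many independent things are actually being measured.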
And then there's this insulting nonsense:
"we find is that hugely reduces the drop out rate right so instead of 20 to 30% of people not completing the assessment we might get it down to under 1% and what that means is that the people who were dropping out those from lower socio-economic backgrounds didn't have a quiet place to study etc are staying in that Talent Pipeline"

I mean come on bro, if you can't be bothered to complete a 40-minute test, htf do you expect to function as a solicitor or accountant? Sure, you get fewer drop-outs, but so what? And frankly, saying that people from lower socio-economic backgrounds can't do a test is patronising.

"and the court of appeal in 2019 ruled that conventional scenario-based assessments like situation judgment test should not be used to screen out neurodiverse candidates so what we do instead is we use dynamically adaptive algorithms that accommodate different decision-making styles and we use a form of artificial intelligence to achieve that so there's quite a lot going on like under the Bonnet but for the candidate it doesn't feel stressful "

Not true. It was the Employment Appeal Tribunal (EAT), not the Court of Appeal, and the decision was handed down in 2017, not 2019.

Secondly, that case https://assets.publishing.service.g...l_Service_v_Ms_T_Brookes_UKEAT_0302_16_RN.pdf was fact-specific: the candidate asked for the reasonable adjustment of giving narrative answers and was refused. It does not say SJTs should not be used; it is a finding that, on that fact pattern, the candidate was discriminated against.

Thirdly, the claimant was only able to prove the practice was discriminatory because SJTs are well-studied, so the evidence needed to make that showing actually exists.

Here we're expected to take the salesman's word that his product is better, where "better" means "fairer to ND candidates".

However, again asking Gemini:

1. The "Wait 5 Seconds" Rule (Forced Delay)​

This mechanism is designed to force "thinking time," but it can inadvertently penalize specific neurotypes:

  • ADHD (Impulsivity): A core trait of ADHD can be difficulty with forced waiting. Being forced to sit idle for 5 seconds before clicking can cause frustration or a break in focus. If the candidate "fidgets" with the mouse during this wait, the algorithm might interpret this movement as "uncertainty" or "erratic behavior" rather than a neurological need for stimulation.
  • Anxiety/OCD: Candidates with high anxiety or OCD often double-check or over-analyze during a pause. If the algorithm tracks mouse hovering or "jitter" during the 5-second wait, it might flag the candidate as "indecisive" or "lacking confidence," when they are actually just being thorough or managing anxiety.

2. Mouse Tracking (Process Tracing)​

Neurosight claims to analyze how you move your mouse (speed, trajectory, hesitation) to judge your personality or cognitive style. This is highly problematic for several groups:

  • Dyspraxia (DCD): Dyspraxia affects fine motor skills and coordination. A candidate with dyspraxia may have "jerky" mouse movements, overshoot buttons, or take a less direct path to the answer. An algorithm looking for "smooth, confident decision-making" could misinterpret these motor control differences as cognitive confusion or lack of capability.
  • Autism: Autistic individuals sometimes have different processing speeds or may hover over multiple options to literally read them (visual processing) before deciding. "Process tracing" algorithms often assume that hovering over an option means you are "considering" it. If an autistic candidate reads by tracking with their mouse, the system might incorrectly score them as "conflicted" or "unsure."

3. The "Black Box" Problem​

The biggest issue is transparency. Because the scoring is secret (proprietary):

  • You cannot know if "hesitating" is scored as good (thoughtful, careful) or bad (slow, indecisive).
  • A neurodiverse candidate cannot ask for the correct reasonable adjustment (e.g., "don't track my mouse movement, just score my final answer") because they aren't told that their mouse movement is being graded in the first place.




Continuing through bro's sales pitch:

"All of our assessments are bespoke for the employer right we don't do off of the shelf and the reason for that is is every employer different set of values competency framework performance Frameworks they want the Assessments in their brands and in their language right so um what we do sometimes is we go to an employer we look at their existing test and we just say to them look we can measure what that test measures with significantly more accuracy so I've talked about construct validity on sjts it's usually about 0.1. We can get up to 0.6 sometimes 0.7 so we're not marginally more accurate we're significantly more accurate "


Again this is absurd.

As I've already observed, during the forced 5 seconds I'm anticipating the likely best answer (which in no way reflects my own values; my goal is to pass a screening test designed to potentially reject me and leave me unemployed and hungry). They claim every test is customised to the employer's language and brand values. But subtle differences in wording to reflect those 'brand values' will affect response times, and while they claim they've done all this testing to prove the algorithm is fair, trust me bro, it doesn't sound like they re-standardise and re-validate on 500 people for every employer.

And frankly, even if the algorithm were validated on 500 people: when I sat the last one I'd had two double G&Ts, because, as their sales pitch says, it's an easy test that can be sat anywhere, any time. So why not do it right away, any time, any place, rather than preparing as you would for a mentally taxing test?

"we found is that at a certain cut off score at a certain cut off score in this [our] sub five minute assessment their higher performers were 320% more likely to pass than their lower performers"

and

" we're seeing Zero adverse impact aligned to socioeconomic status disability ethnicity gender"

lololol. So you have a group of "higher performers", whose make-up will tend to differ from the "lower performers" on one or more of the characteristics listed in the second quote, and you claim your test screens between them super-efficiently, but then you also claim that there is zero adverse impact.
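The two claims are in tension, and a toy simulation shows why. All numbers here are invented; I'm assuming a modest d = 0.5 gap between the groups on whatever trait the test screens:

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 100_000)   # majority group trait scores
group_b = rng.normal(-0.5, 1.0, 100_000)  # minority group, assumed d = 0.5 lower

cutoff = 0.5                               # the "certain cut off score"
rate_a = (group_a > cutoff).mean()
rate_b = (group_b > cutoff).mean()
print(f"selection rates: {rate_a:.1%} vs {rate_b:.1%}")
print(f"adverse-impact ratio: {rate_b / rate_a:.2f} (4/5ths rule wants >= 0.80)")
```

That ratio of roughly 0.5 fails the 4/5ths rule outright. Zero adverse impact and efficient separation of performer groups can only coexist if the groups don't differ at all on the measured trait.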



"they were using the output from the online assessment to inform the interview right, so it wasn't just being used as a screen out - although it was used for that as well - what they do is they hire this Apprentice cohort near 300 of them what percentage do you think pass their probation I'll tell you because 99.7% "

And how does that compare to other apprentice cohorts? What's the statistical significance of that?
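The significance question is easy to pose, if only we had a comparator. A minimal sketch, assuming the cohort was exactly 300 (so 99.7% ≈ 299 passes) and trying a few hypothetical baseline probation pass rates:

```python
from scipy.stats import binomtest

passed, cohort = 299, 300                  # 99.7% of an assumed 300
for baseline in (0.95, 0.98, 0.99):        # hypothetical comparator pass rates
    p = binomtest(passed, cohort, baseline, alternative="greater").pvalue
    print(f"baseline {baseline:.0%}: one-sided p = {p:.3f}")
```

Against a 99% baseline, 299 of 300 isn't even significant at the 5% level; the headline number is meaningless without knowing what other cohorts achieve.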

Ok, so here I am, a candidate, sitting your tests - twice now. We get the patronising feedback email afterward, but not the secret "this person is good/bad/indifferent" data that is really the key to this exercise.

What do you think will happen? I, and other candidates, knowing that the assessment is being used as a PFO filter, will think about how to beat the test. We will alter our behaviours in ways that don't match the candidates in whatever testing they've done.

I can see that if I naively treat this as a "simple test", as opposed to the make-or-break critical filter that it actually is, then I am being an idiot. I don't like being an idiot. I will realise, for example, that the natural hesitation when deciding between two choices (the mouse pointer drifting across the screen, which the candidates in whatever trials were conducted would have exhibited) scores badly. I will avoid it. I will simply move directly to the "correct" answer.

I just want to offer this whole bundle of marketing BS, AI and tech mumbo jumbo a big 🖕

Edit:
I asked for a full AI analysis of the entire transcript; my prompt was "are the claims plausible?":

1. Claim: Traditional Assessments are "Broken" (Validities of ~0.1)​

Plausibility: Exaggerated
  • The Claim: Betts argues that traditional Situation Judgment Tests (SJTs) and psychometrics have validity coefficients around 0.1 (barely better than a coin toss) and that the industry has "known this for 40 years."
  • The Evidence: This contradicts the gold-standard meta-analyses in psychology.
    • General Mental Ability (GMA): Decades of research (e.g., Schmidt & Hunter, 1998; 2016) consistently show GMA tests have predictive validity coefficients of 0.51 to 0.65 for job performance.
    • SJTs: Meta-analyses typically place Situation Judgment Tests between 0.26 and 0.38.
  • The Verdict: Betts is likely citing the worst performing tests to make his product look better. While bad tests exist, claiming the entire industry operates at 0.1 validity is factually incorrect based on academic literature.
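To put those coefficients in perspective: squaring a validity coefficient gives the share of job-performance variance it explains. Using the figures quoted above:

```python
for label, r in [("pitch's figure for SJTs", 0.10),
                 ("meta-analytic SJT range, low", 0.26),
                 ("meta-analytic SJT range, high", 0.38),
                 ("GMA (Schmidt & Hunter)", 0.51),
                 ("Neurosight's claim", 0.70)]:
    # r squared = share of job-performance variance the score explains
    print(f"r = {r:.2f} -> {r * r:.1%} of variance ({label})")
```

Even the meta-analytic SJT figures explain only about 7-14% of variance, which is why a claimed jump to roughly half of all variance should raise eyebrows.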

2. Claim: Neurosight achieves Validity of 0.6 to 0.7​

Plausibility: Highly Improbable (in real-world settings)

  • The Claim: Betts states they can get validity up to 0.6 or 0.7, and mentions a correlation of over 0.7 for a law firm study.
  • The Science: In social science, a correlation of 1.0 is perfect prediction.
    • 0.3 is considered useful.
    • 0.5 is considered strong.
    • 0.7 is extremely rare in human behavior prediction.
  • Why this is suspicious: A score of 0.7 approaches the theoretical limit of reliability. (If a candidate takes the test twice, they might only correlate 0.8 with themselves; see the quick calculation after this list.) To claim a test correlates 0.7 with job performance suggests the test is almost a perfect crystal ball.
  • The "Overfitting" Risk: High numbers like this usually happen in "Concurrent Validity" studies (testing employees who already work there) where the AI "memorizes" the traits of current staff. This famously degrades when applied to new candidates (Predictive Validity).

3. Claim: "Zero Adverse Impact" (Race, Gender, SES, Disability)​

Plausibility: Unproven / The "Black Box" Risk

  • The Claim: Betts claims "Zero adverse impact aligned to socioeconomic status, disability, ethnicity, gender."
  • The Statistic: In hiring, "Adverse Impact" is usually measured by the "4/5ths Rule": if the selection rate for a minority group is less than 80% of the selection rate for the majority group, adverse impact exists (see the worked example after this list).
    • Standard Cognitive Tests: Often show significant gaps. For example, on some cognitive batteries, Black candidates may score roughly 1.0 standard deviation lower than White candidates on average (d-score), leading to adverse impact ratios well below 0.80.
    • Neurosight's Claim: Betts is claiming an adverse impact ratio of 1.0 (perfect parity).
  • The Risk: While stripping out language helps with Socioeconomic Status (SES) and Ethnicity, the "Zero Disability Impact" claim is the most contentious.
    • Motor Control: As analyzed previously, tracking mouse movements (106,000 data points of movement) inherently relies on motor function.
    • The Contradiction: Betts claims they use "dynamically adaptive algorithms" to fix this. However, without external peer review, it is difficult to prove that an algorithm can distinguish between a candidate who is "indecisive" (a trait they want to penalize) and a candidate who has hand tremors or dyspraxia (a trait they legally must not penalize).
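For completeness, here is the 4/5ths rule from the list above as arithmetic, with invented selection numbers:

```python
# (hired, applied) per group -- invented numbers for illustration
selected = {"majority": (120, 400), "minority": (45, 250)}

rates = {g: hired / applied for g, (hired, applied) in selected.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    verdict = "OK" if ratio >= 0.80 else "adverse impact"
    print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {verdict}")
```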

 

Bruce Wayne Attorney at Law

Distinguished Member
Gold Member
Premium Member
Sep 10, 2023
68
109
the quality of applications seems to be very high every year
The PFO fatigue is real. What stings the most in the rejections is when they decide to specify how high quality the applications have been this year....

I don't need to know that unless you are progressing me? Such a slap in the face, honestly. 😅😅😅
 

Abbie Whitlock

Administrator
Staff member
Gold Member
Premium Member
Sep 11, 2025
746
764
Have an AC tomorrow that contains an individual research and discussion task. Does anyone have any insights into what this could entail? I was thinking it would be a contract review exercise but I’m not sure.
Hey!

Whilst I'm not too sure on what the overall structure will look like, I can share some general advice on what you could expect! An individual research and discussion task typically tests how you think, rather than information that you already know.

It can often involve being given a short pack or scenario (e.g. a client, a market issue, a transaction or problem the firm is facing), some time to research and prepare your thoughts and arguments, and then a discussion with an assessor or in a small group (similar to a case study exercise). You might be asked to:
  • Identify key issues or risks
  • Explain your reasoning and highlight any assumptions
  • Discuss possible solutions / options, and any trade-offs
  • Respond to follow up questions (including challenges!)

The graduate recruitment team are usually looking for structured thinking, commercial awareness, and clear communication, rather than technical knowledge of any particular area. If you do get something document-based, it's more likely to be about spotting issues and prioritising, rather than deep analysis.

I'd say that the best way to prepare is to practise structuring your thoughts, staying calm under pressure, and explaining your reasoning out loud (and dealing with having your opinions challenged).

Best of luck!! :)
 

Abbie Whitlock

Administrator
Staff member
Gold Member
Premium Member
Sep 11, 2025
746
764
Hey guys! Just looking for some advice. Is it worth applying to firms for DTC if I have never done a VS (at any firm) and have some (but not extensive) legal experience?

Previously I thought it would be a waste of time, but I would love to know others' thoughts.
Hey!

This is a great question! Whilst it is likely to be firm dependent, I would still encourage you to apply via the DTC route if you have some legal experience and feel that would be better suited than the VS. I applied through the DTC route at my firm and, whilst I had a mix of VS and paralegal experience, many of the other DTC individuals in my cohort had no legal experience at all - it's often about how well you are able to perform at the AC, rather than your background :)
 

BealMcAlly

Legendary Member
Gold Member
Premium Member
Feb 3, 2025
266
342
I think they say it to be reassuring, but it is actually annoying because they send it to everyone. The implication is "hey, even though we did not progress you, it's not your fault - standards were just high", but I would respect it more if they could tell me WHY they are not progressing. Otherwise they can PFO
I fully get why they include this, but I just don't think there is a need unless they are going to delve into specifics. I had a similar case last cycle for a DTC claiming my application was placed in the top third... okay? That means nothing to me unless I am progressing. 😅

It doesn't help me to know nothing about why I was rejected compared to the EXTREMELY amazing candidates they progressed. Was I close, or was I way off? 😅

I would be okay with them just saying 'Unfortunately we are not progressing you' and moving on.
 

Abbie Whitlock

Administrator
Staff member
Gold Member
Premium Member
Sep 11, 2025
746
764
can you include subheadings + bullet points in a client email or is that better for an internal research memo?
Hi!

In previous written exercises I have completed, I have used both subheadings and bullet points for client emails. They can work very well to improve clarity and readability - especially if you are summarising points, key risks, or next steps! The key is to use them sparingly and to balance them with clear prose, so the email still feels professional and client-friendly.

For an internal research memo, subheadings and bullet points are much more standard and can be used more extensively, as the audience is internal and the purpose is often to convey information efficiently and in a structured way.

I hope that assists! :)
 
