
TCLA Vacation Scheme Applications Discussion Thread 2025-26

And I would encourage you to read this (as it happens, there is some evidence - at least from my skim read)


And also:


These use data and statistical methodology as part of their validation process. I don't deny the WG is flawed, and the article you posted shows some of those flaws, but as a test most studies show some validity and some level of prediction in terms of outcomes. My personal experience is that formal logic learning did improve my average score, though this is obviously unsubstantiated.
To be fair to SJTs - they are considered a data-backed predictor of job performance:

Now, my bigger issue with SJTs is this:
1) you end up ranking responses to a situation that you wouldn't actually take - which would you be least likely to do? At this point, you are just picking what the firm wants based on the firm's profile. In a sense that is a thinking skill, but it is not reflective of personality (in the way the test is designed).
2) on sliding scales, the paired statements are never polar opposites. For example, you could be asked to rank "I prefer to plan my work where possible" against "I have the ability to respond to change and work under pressure."
Now, how would somebody who has a great ability to respond to change, but who personality-wise prefers to plan where possible, mark themselves? Do they mark themselves in the middle, which could indicate the same as someone who is not strong at either, when in fact they are strong at both?
3) the same person will get vastly different strengths and weaknesses every time they answer. For me, that undermines its validity as a predictor of specific personality traits. Overall scores, which firms use, I can agree carry some validity.

Anyways, I believe both studies (or at least the first) show that the inductive and deductive reasoning aspects carry the most weight. And not to sound like a broken record, but Osborne Clarke had a better, more evolved way to measure deductive reasoning than the WG (and it is the type of deductive reasoning I'd say is more reliable).

The bad and the good news is that what either of us thinks doesn't matter, as we don't design the tests. However, what I can say, if it's helpful to anybody, is that the critical thinking book (and the occasional podcast) did improve my WG scores. You can disagree with the correlation, but it's simply what I experienced firsthand 🤷‍♂️.
You clearly didn't read what I linked, as it addresses both of the poorly calibrated and poorly defined pieces of evidence you just linked.
 
when you talk about time management in an interview, do you mention one particular day when you were busy and had competing deadlines, or do you mention a longer period where you had to juggle multiple things?
You could always do both! Showing that you can manage competing commitments over a longer period of time will demonstrate consistency, but then maybe having a more concrete example of a specific situation might be more memorable. So maybe try to mix both
 
You clearly didn't read what I linked, as it addresses both of the poorly calibrated and poorly defined pieces of evidence you just linked.
Yes. I understand the concept of construct validity, causation, etc. (and it did talk about these types of studies rather than the specific studies - I felt the studies I presented were slightly better examples than the ones referenced in the paper, although I can't remember the ones in the paper, tbh, so I may be wrong here).

But the fact is it hasn't produced empirical evidence that formal logic training can impact WG scores. Perhaps, combined with other factors - i.e. a lack of probabilistic judgement when taking the test, or not understanding what the types of questions are looking for - I agree that it can. From a personal perspective I disagree that formal logic training lowers scores, however there is no study showing this either way.

Knowing which parts of the test to apply deductive standards to, and which parts to apply inductive standards to, could penalise logical reasoning; however, that also comes down to understanding the non-technical norms of the test (which can be accomplished through a specific understanding of the test, as well as practice tests).

But regardless, it's like economics: all models are wrong, some are useful. There will be flaws in any standardised testing, some more than others. A levels, GCSEs and university exams are much better to go on (but by no means perfect); however, when thousands of candidates meet the minimum requirements, the WG is a very easy test to administer, and it shows a correlation with performance (regardless of whether it can accurately assess critical thinking). Formal logic training without understanding/practising the test and knowing where to use inductive and deductive reasoning can theoretically be a disadvantage (due to the ambiguous nature of the questions), I accept that. But once I knew and understood the "test logic", I found it a big help. Again, I'll agree to disagree here; I have absolutely nothing to win or lose in this argument, just saying what helped me with the test, and why I think it's a better (and more reliable) test than SJTs.
 
Personal request here, I wasn't sure whether to ask this on this or another forum but anyway I'd be grateful if somebody with some insight could provide some advice or point me in the right direction.

This application cycle, like my last two, has been frustrating: zero assessment centres so far. I'm starting to feel that I might be locked out of this profession, even though I feel my CV is alright. I did law at a good/mid-tier Russell Group university (think Bristol/Warwick, etc.) and graduated recently. I gained first-class marks in the majority of my modules and ended up with a high 2:1 overall, which at my university puts me around the top 10% of my year. I've got sustained extracurricular involvement (pro bono, law soc, etc.) and I've been to plenty of events and fairs. I've got a little work experience too.

Unfortunately my transcript is spotty: I have four 2:2s, including a 0 in my second year. I can explain these results with solid, verifiable extenuating circumstances, but I'm wondering how much firms really take this into account?

I don't think my application answers are the problem, because I'm being progressed to the online test and/or video interview at nearly all of the firms I've applied to. Moreover, I know I've done well on at least some of these tests. On the Mills & Reeve test my scores were all in the very high bracket, right at the end of the sliders, apart from one aspect which was moderate. Still rejected. I've only been invited to two ACs before, neither of which I quite made, but now, with ACs having already been given out, it seems I'm out of luck for this cycle.

I know it's a numbers game, so I guess my question is: should I cut my losses and forget about law in the UK, or keep at it? Is it worth saving up for a master's degree or not? I'd prefer an answer from somebody with actual experience and insight, or a suggestion of where, or with whom, I could get an opinion or discuss this. Many thanks.
Hi, just wanted to say that if you're getting progressed to the second stage, it's most likely not your application answers nor your grades that are stopping you. Most firms don't reconsider the first stages after you progress. You might want to change your approach to tests and video interviews, or apply to firms that don't incorporate one or the other if you're really struggling! Hope this helps.
 
Yes. I understand the concept of construct validity, causation, etc. (and it did talk about these types of studies rather than the specific studies - I felt the studies I presented were slightly better examples than the ones referenced in the paper, although I can't remember the ones in the paper, tbh, so I may be wrong here).

But the fact is it hasn't produced empirical evidence that formal logic training can impact WG scores. Perhaps, combined with other factors - i.e. a lack of probabilistic judgement when taking the test, or not understanding what the types of questions are looking for - I agree that it can. From a personal perspective I disagree that formal logic training lowers scores, however there is no study showing this either way.

Knowing which parts of the test to apply deductive standards to, and which parts to apply inductive standards to, could penalise logical reasoning; however, that also comes down to understanding the non-technical norms of the test (which can be accomplished through a specific understanding of the test, as well as practice tests).

But regardless, it's like economics: all models are wrong, some are useful. There will be flaws in any standardised testing, some more than others. A levels, GCSEs and university exams are much better to go on (but by no means perfect); however, when thousands of candidates meet the minimum requirements, the WG is a very easy test to administer, and it shows a correlation with performance (regardless of whether it can accurately assess critical thinking). Formal logic training without understanding/practising the test and knowing where to use inductive and deductive reasoning can theoretically be a disadvantage (due to the ambiguous nature of the questions), I accept that. But once I knew and understood the "test logic", I found it a big help. Again, I'll agree to disagree here; I have absolutely nothing to win or lose in this argument, just saying what helped me with the test, and why I think it's a better (and more reliable) test than SJTs.
I don't think you're understanding what you're arguing here.

The Watson Glaser has weak subscales and poor internal consistency.

Studies have also shown that the Watson Glaser has far lower predictive power for on-the-job performance once academic background etc. is taken into account.

Once you actually dig into it empirically, the Watson Glaser is not what you're saying it is. I also have not argued for SJTs at any point; it's just another screening tool that lazy firms use to screen out candidates.
 

About Us

The Corporate Law Academy (TCLA) was founded in 2018 because we wanted to improve the legal journey. We wanted more transparency and better training. We wanted to form a community of aspiring lawyers who care about becoming the best version of themselves.
