Thanks! 6th Jan for the app and 13th Jan for test submission; I got the test on the 12th.
The question is, I wonder what they mean by computer skills. I am 'good', but I can't do some of the crazy Excel stuff I have seen online. Imagine if, on the assessment centre day, they bring out Excel and ask us to do some crazy stuff!

I would rate my computer skills 1000/10 because I can type fast, I am proficient in Microsoft Word, Excel and PowerPoint, and can switch tabs at lightning speed, meaning I can multitask and handle multiple work streams all at once…
Meanwhile, this is those computer skills live in action:
Do you mind sharing some examples to help me consider the problems with the different types of questions?

I've been thinking about Watson Glaser tests recently.
I find them rather curious (and frustrating, like many of us), especially because I have a particular relationship with the kind of "critical thinking" they are supposed to test.
I have taught critical thinking to students at BA and MA level, and have published several book chapters on the philosophy of interpretation. It is very important that I resist the urge to send long emails to grad recruitment teams saying: "Here is my expert opinion on why WG is a poor way of assessing candidates..." But I don't think that will really help my chances of getting a TC! You've got to pick your battles.
TCLA seems like the place to vent about this kind of thing, though, so, time to get on my soapbox...
WG was famously critiqued by Professor Kevin Possin in the journal Informal Logic, Vol. 34/4 (2014), and I have to agree that the methodology has some serious flaws. The issue is neatly summed up in Possin's subtitle: "The more you know, the lower your score". WG has a very limited conception of what "critical thinking" actually involves, and often encourages poor judgement.
Although WG candidates are encouraged to identify a single correct answer, it is often possible to make a strong case that several answers are entirely logical and intelligent responses. Sometimes, the most rigorous logical analysis will, in fact, identify a supposedly "incorrect" answer. I would often expect the best lawyer/critical thinker in the room to get the WG test "wrong", at least for several questions each time.
The reason this happens is actually quite simple. The parameters of each question establish a line where inference should be limited. Although candidates who cross that line are penalised (rightly) for assuming too much when making a judgement, the test also penalises candidates with a more developed critical sense who draw that line at an earlier point in the process of assuming relationships and causal connections. Possin breaks this down in some detail, and I would agree with much of it.
When teaching critical thinking, my students would find it easy to undermine the methodology used for many of the standard WG questions. I wouldn't go so far as to say that WG rewards "less intelligent" or less critical candidates. But what does seem to happen is that in order to succeed, candidates have to become acclimatised to the "rules of a game" - the Watson Glaser game - which does not necessarily equate to an ability to rigorously assess data.
(Mind you, that is not a million miles away from what lawyers actually do. The law of England and Wales is a big book of "game rules" that have been evolving for centuries, by a combination of case law and legislation. So maybe there is some logic in it!)
Ultimately, I am glad to know that many firms use WG tests within a more holistic application system, taking other factors into account. We have seen that many times here on TCLA. A low score is not the end of the world.
In the meantime, I'm just going to answer everything with "insufficient data"
...
Any thoughts about this?
I agree that it’s somewhat arbitrary, but I think any critical thinking test will inherently fall into that territory. However, I do think it does a good job of testing logical critical thinking and your ability to analyse patterns and make judgement calls, which is probably the best way to assess the critical thinking of thousands of applicants at the early stages of an application. From personal experience, taking the test made me realise that I actually made a lot of assumptions, or conflated facts with my views.
not as cooked as if you spelled it Brown tbf

As I was submitting my Brown Jacobson application, I realised my work experience had formatted weirdly - some of it was oddly indented and the paragraphs were all over the place. Do you think people will sense AI? Am I cooked?
Hey, this is quite interesting actually. Can you share the link to the video, and did you take any notes you could share, please?

I just saw a video explaining how to use the Beckham family drama as a commercial awareness talking point, and @hollieg5 doesn’t know what she’s done by connecting this dot for me.
I had an eyebrow raised, then she said “the governance structure of a global family enterprise” and then spoke about brand and reputational risk. AMAZING!!
Edit: If you have an AC with Linklaters, from memory it may be relevant for a part of the AC ❤️
My friend applied 5 min before the deadline and got rejected today. So I think no news is good news.

I know someone said today that not hearing from HSFK is good news, as they screen potential AC candidates and then eventually make a final shortlist. Would someone be able to confirm if grad rec said this!! Thanks!
Hey, I haven’t seen the video, but saw a LinkedIn post you might find handy.

Hey, this is quite interesting actually. Can you share the link to the video, and did you take any notes you could share, please?
I broke it down by proficiency with the Office Suite etc., and then referred to specific modules I had previously taken, which included programming modules and the building and evaluation of AI models.

How would you guys answer the question: how do you rate your computer skills?
I asked Grok to make me a better version.

I know someone has mentioned this before, but why is the Macfarlanes 'rebrand' so funny to me? Like, it's just in bold now.

Here is a link to the Possin article (below). I'm not too sure if much has been written about WG in response over the last decade, but the argument holds up pretty well. WG has been around for a long time, and doesn't seem to have changed much over the decades.

Do you mind sharing some examples to help me consider the problems of the different types of questions?
heyy congrats!!! do u mind sharing the dates that were available, please x

Linklaters AC for Feb, terrified >.<
Out of interest, can I ask why you're curious about dates?

heyy congrats!!! do u mind sharing the dates that were available, please x
Did u apply for the pathway to practice or direct TC for Fieldfisher?

Has anyone heard back from these firms - Fieldfisher (post telephone interview), Travers Smith and Browne Jacobson? I am a little concerned, as I thought my telephone interview with Fieldfisher went well and would hear back regarding an AC at the start of this month, but have had no reply yet. From the other two firms I have not heard back since I submitted my application lol