Yes, I understand the concepts of construct validity, causation, etc. (and the paper did talk about these types of study rather than the specific studies - I felt the studies I presented were slightly better examples than the ones referenced in the paper, although I can't remember the ones in the paper tbh, so I may be wrong here).
But the fact is it hasn't produced empirical evidence that formal logic training can impact WG scores. Combined with other factors, I agree that perhaps it can - e.g. a lack of probabilistic judgement when taking the test, or not understanding what the question types are looking for. From a personal perspective I disagree that formal logic training lowers scores; however, there is no study showing this either way.
Not knowing which parts of the test to apply deductive standards to, and which parts inductive standards, could penalise logical reasoning; however, that also comes down to understanding the non-technical norms of the test (which can be accomplished through specific understanding of the test, as well as through practice tests).
But regardless, it's like economics: all models are wrong, some are useful. There will be flaws in any standardised testing, some more than others. A levels, GCSEs, and university exams are much better to go on (though by no means perfect), but when thousands of candidates are meeting the minimum requirements the WG is a very easy test to administer, and it correlates with performance (regardless of whether it can accurately assess critical thinking). Formal logic training without understanding/practicing the test and knowing where to use inductive and deductive reasoning can theoretically be a disadvantage (due to the ambiguous nature of the questions) - I accept that. But once I knew and understood the "test logic", I found it a big help. Again, I'll agree to disagree here; I have absolutely nothing to win or lose in this argument², I'm just saying what helped me with the test, and why I think it's a better (and more reliable) test than SJTs.