[1] http://ollieorange2.wordpress.com/2014/06/20/the-two-kinds-of-effect-size/

“So, to sum up, the two major players using ‘the Effect Size’, John Hattie and the Education Endowment Foundation, are actually using it to mean two completely different calculations, one actual and one relative.

For one of them, anything above 0.40 would be ‘good’ [Hattie], the other anything above zero would be ‘good’ [EEF].”
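To make the distinction concrete, here is a rough sketch of the two calculations as the post characterises them – one measuring a group’s own gain over time (‘actual’), the other measuring a group against a separate control group (‘relative’). The exact formulas vary between studies, so read this as an illustration of the distinction rather than either organisation’s official definition.

```latex
% 'Actual' gain effect size: the same pupils measured before and after,
% hence the above-0.40 threshold quoted for Hattie.
d_{\text{gain}} = \frac{\bar{x}_{\text{post}} - \bar{x}_{\text{pre}}}{s}

% 'Relative' effect size: an intervention group compared with a control
% group that was also taught as normal, hence the above-zero threshold
% quoted for the EEF.
d_{\text{relative}} = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}
```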

[2] http://ollieorange2.wordpress.com/2014/03/23/people-who-think-probabilities-can-be-negative-shouldnt-write-books-on-statistics/

John Hattie admits that half of the Statistics in Visible Learning are wrong


“At the researchED conference in September 2013, Professor Robert Coe, Professor of Education at Durham University, said that John Hattie’s book, ‘Visible Learning’, is “riddled with errors”.


But what are some of those errors?


The biggest mistake Hattie makes is with the CLE statistic that he uses throughout the book.


In ‘Visible Learning’, Hattie only uses two statistics, the ‘Effect Size’ and the CLE (neither of which Mathematicians use).


The CLE is meant to be a probability, yet Hattie has it at values between -49% and 219%. Now a probability can’t be negative or more than 100% as any Year 7 will tell you.


This was first spotted and pointed out to him by Arne Kåre Topphol, an Associate Professor at Volda University College, and his class, who sent Hattie an email.


In his first reply – here – Hattie completely misses the point about probability being negative and claims he used a different version of the CLE from the one he actually referenced (by McGraw and Wong). This makes his academic referencing, hmm, the word I’m going to use here is ‘interesting’.


In his second reply – here – Hattie reluctantly acknowledges that the CLE has in fact been calculated incorrectly throughout the book, but brushes it off as no big deal that, of the two statistics in the book, he has calculated one incorrectly.


There are several worrying aspects to this –


Firstly, it took 3 years for the mistake to be noticed, and it’s not as though it’s a subtle statistical error that only a Mathematician would spot – he has probability as negative, for goodness’ sake. Presumably, the entire Educational Research community read the book when it came out and they all completely missed it. So, the question must be asked: who is checking John Hattie’s work? As a Bachelor of Arts, is he capable of spotting Mathematical errors himself?


In Mathematics, new or unproven work is handed over to unbiased judges who go through it with a fine-tooth comb before it is considered to have the stamp of approval of the Mathematical community. Who is performing this function for the Educational community?


Secondly, despite the fact that John Hattie has presumably known about this error since last year, there has been no publicity telling people that part of the book is wrong and should not be used. Surely he could have found time, between flying round the world to his many Visible Learning conferences, to squeeze in a quick announcement.”


As the stepfather of one of the letter writers, a Professor of Statistics, said –

“People who don’t know that Probability can’t be negative, shouldn’t write books on Statistics”
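For context on why values like -49% or 219% are an immediate red flag: the CLE that Hattie references, McGraw and Wong’s (1992) Common Language Effect size, is defined as the probability that a randomly chosen score from one group beats a randomly chosen score from the other, and under a normal model it is computed from the standard normal CDF, so it is bounded between 0 and 1 by construction. The formula below is background on the statistic itself, not a reconstruction of how the book’s figures were produced.

```latex
% Common Language Effect size (McGraw & Wong, 1992): the probability that a
% randomly drawn score from group 1 exceeds a randomly drawn score from group 2.
\mathrm{CLE} \;=\; P(X_1 > X_2) \;=\; \Phi\!\left(\frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right)
% With equal variances this reduces to \Phi(d/\sqrt{2}), where d is Cohen's d and
% \Phi is the standard normal CDF, so 0 < CLE < 1 whatever the inputs:
% values such as -49% or 219% cannot come out of this formula.
```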

Sources -

Book review – Visible Learning by @twistedsq

Can we trust educational research? – (“Visible Learning”: Problems with the evidence)



[3] Summary

The book is difficult to read from cover to cover, but I could see that it would be useful as a reference. You want to know about a particular intervention, say mixed-sex versus single-sex schools, and you flick to the appropriate section to see the conclusion: no major effect. You look up tracking (aka setting or streaming by ability): no major effect.

However, given all the methodological issues above, I do not feel that I can trust any of the averaged effect size results in this book without digging further into the original meta-analyses to check that they have been combined appropriately.

These are not side issues: they are core problems with the approach of the book.

The averaging of effect sizes across meta-analyses (and then comparing these averages) is the key technique by which Hattie judges what works and what doesn’t, and thus forms his narrative about what is important in education.

For example, on page 243, Hattie compares the average effect sizes for the “teacher as activator” techniques he has analysed against those for “teacher as facilitator”. On the basis that the former are higher, he concludes “These results show that active and guided instruction is much more effective than unguided, facilitative instruction”. He’s not necessarily wrong, but if we cannot trust the average effect sizes he gives as evidence, and cannot sensibly compare them, we cannot make that conclusion from this data. In which case, the book is not much use as an argument or a useful summary of the data, just as an impressive catalogue of the original meta-analyses.

http://academiccomputing.wordpress.com/2013/08/05/book-review-visible-learning/
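To make the reviewer’s methodological point about averaging a little more concrete: within a meta-analysis, estimates are normally pooled with inverse-variance weights, so imprecise studies count for less; a plain arithmetic mean of headline effect sizes ignores this (and ignores differences in design, population and outcome measure). The sketch below uses invented numbers, not figures from Hattie or from the review, purely to show how far the two summaries can diverge.

```python
import numpy as np

# Hypothetical effect sizes from three meta-analyses of the "same" intervention,
# with very different precisions. The numbers are invented for illustration only.
effects = np.array([0.80, 0.20, 0.15])      # reported mean effect sizes (Cohen's d)
variances = np.array([0.20, 0.01, 0.01])    # variance of each pooled estimate

# Simple unweighted average - roughly the kind of pooling the review questions.
naive_mean = effects.mean()

# Standard fixed-effect (inverse-variance weighted) pooling, which down-weights
# the imprecise estimate instead of treating all three as equally trustworthy.
weights = 1.0 / variances
weighted_mean = np.sum(weights * effects) / np.sum(weights)

print(f"unweighted mean:       {naive_mean:.2f}")     # ~0.38
print(f"inverse-variance mean: {weighted_mean:.2f}")  # ~0.19
```

The point is not that one of these numbers is the right answer; it is that the two summaries can differ substantially, which is why the reviewer wants to go back to the original meta-analyses before trusting or comparing the averages.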


[4] “Today, then, I want to address a couple of the statistical weaknesses in Hattie's work. These weaknesses, and the fact that they seem to have been largely unnoticed by the many educational researchers around the world who have read Hattie's book, only strengthen my doubts about the trustworthiness of educational research.


I agree with Hattie that education is an unscientific field, perhaps analogous to what medicine was like a hundred and fifty years ago, but while Hattie blames this on teachers, whom he characterizes as "the devil in this story" because we ignore the great scientific work of people like him, I would ask him to look in the mirror first.


Visible Learning is just not good science”

http://literacyinleafstrewn.blogspot.co.uk/2012/12/can-we-trust-educational-research_20.html


Précis: John Hattie’s maths is wrong; so are the rankings, and so is the idea that learning can be made visible and observed.