Precis: In Part 1 I review the evidence that differentiation does not work. In Part 2 (to be done over Easter) I show that Carol Tomlinson, its chief proponent, has admitted that she has no evidence that differentiation works, despite advising its use.


Differentiation has become an axiom of contemporary educational practice. The advent of the cognitive science model of learning, “rooted in independently verified empirical research” in contrast to previously prevailing educational theories (e.g. constructivism), raises the questions answered below.


1. The definition I’ve used is: “In differentiated instruction, teachers respond to students' readiness, instructional needs, interests and learning preferences and provide opportunities for students to work in varied instructional formats.”

http://en.wikipedia.org/wiki/Differentiated_instruction


I refer you to “Playing the Game: The enduring influence of the preferred Ofsted teaching style” by Robert Peal, July 2014:

http://www.civitas.org.uk/pdf/PlayingtheGame.pdf



Contents

[1] What evidence from scientifically controlled experiments or trials is there that differentiation works? = None.

[2] What evidence from scientifically controlled experiments or trials is there that it does not work? = Several studies,

including evidence from a maths (geometry) randomised controlled trial involving 85 participant schools, 18,000 students and 190 teachers.

[3] How can we explain why it should work or not using the “cognitive science model of learning”?

[4] What alternatives are there?

[5] Information source for the links and summary



[1] What evidence from scientifically controlled experiments or trials is there that differentiation works? = None.


I have not found any research papers following scientific protocols that demonstrate that differentiation works, or that predict it as an instructional strategy from a scientifically evidenced theory.


However, if anyone can provide such a study then I will stand corrected.


[2] What evidence from scientifically controlled experiments or trials is there that it does not work? = Several studies.


Quote: “two papers that are of interest. Neither of them deal directly with differentiated instruction but rather with aspects that closely relate to differentiation”


[1] The first paper is a large scale correlational study of TIMSS data from the US. This shows that a greater proportion of lecture-style teaching was associated with higher performance on the TIMSS assessment.


[2] The second paper is a randomised controlled trial from Costa Rica [these are the gold standard in science research].


It is particularly interesting because the results obtained are the opposite of what the researchers were hoping to find.


A traditional teaching model was pitted against various levels of an innovative teaching strategy that allowed students the opportunity to discover and explore.


The intervention groups differed in the amount of technology – laptops etc – that was available.


Of course, this isn’t quite the model of differentiation that I have outlined above but it is an attempt to personalise the learning.


To their obvious dismay, the researchers found that, “the control group [No differentiation] learned significantly more than any of our treatment arms.”


[3] I was also linked to a few things via Twitter. This paper offers a null result and was brought to my attention by @Rokewood. No relationship was found between, “time spent lecturing in front of the class and student performance.”


[A free copy of the above paper [3] can be found on the author's site here – the link he gives above is an abstract behind a paywall.]

http://www.tierweb.nl/assets/files/UM/Lecturing_styles_TIER_working_paper(1).pdf


The second paper (the Costa Rica trial, linked below) is particularly interesting as it tested not only particular types of intervention but the principle itself, by having a large sample size and a control group that received no differentiation

[85 participant schools, 18,000 students and 190 teachers].

https://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=NEUDC2013&paper_id=229


Quote from “Pedagogical Change in Mathematics Teaching: Evidence from a Randomized Control Trial”:

As a recent study from the European Commission highlights (Eurydice, 2011), in order to achieve mathematical competence a common practice pursued by many countries is to give students a more active role in the generation of knowledge. “Moving away from the traditional teacher-dominated way of learning, active learning approaches encourage pupils to participate in their own learning through discussions, project work, practical exercises and other ways to help them reflect upon and explain their mathematics learning” (Eurydice, 2011, p. 56) – see page 2.


…In this paper we report the results of an experiment with seventh grade Costa Rican children designed to improve their ability to think, reason, argument and communicate using mathematics.
We created a structured pedagogical intervention that allowed students the opportunity for a more active role in the classroom.

We designed and implemented an experiment with seventh grade students in Costa Rica that blends a modern curricular approach with technology for teaching geometry (one of three units of the seventh grade program or about three months of teaching).

We randomly assigned the 85 participant schools in this experiment to one of five conditions:
(1) status-quo (i.e., control);
(2) new curriculum design;
(3) new curriculum design and an interactive whiteboard;
(4) new curriculum design and a computer lab;
(5) new curriculum design and a laptop for every child in the classroom.

All students (18,000) and teachers (190) in the seventh grade of these schools participated in the experiment.
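For readers unfamiliar with a clustered randomised design, the sketch below illustrates the idea in a few lines of Python. It is not the authors' actual procedure: the school identifiers, seed and round-robin allocation are invented purely for illustration. The point is that whole schools, not individual students or teachers, are randomly assigned to the five conditions, which is why (as the authors note in the quote further down) nobody could select themselves into a treatment arm.

```python
import random

# The five experimental conditions described above.
CONDITIONS = [
    "status quo (control)",
    "new curriculum",
    "new curriculum + interactive whiteboard",
    "new curriculum + computer lab",
    "new curriculum + one laptop per child",
]

def assign_schools(school_ids, seed=2013):
    """Cluster randomisation: the school is the unit of assignment, so every
    teacher and student in a school receives the school's condition and
    nobody can self-select into a treatment arm."""
    rng = random.Random(seed)
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    # Deal the shuffled schools round-robin across the five arms so the
    # arms end up equal in size (85 schools -> 17 schools per arm).
    return {school: CONDITIONS[i % len(CONDITIONS)]
            for i, school in enumerate(shuffled)}

# 85 invented school identifiers, purely for illustration.
assignment = assign_schools([f"school_{n:02d}" for n in range(1, 86)])
print(assignment["school_01"])
```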


We find that the control group [the status-quo] learned significantly more than any of the four intervention groups.

The students using the new curriculum without technology learned about 17 percent of a standard deviation less than the status-quo.
Learning was around 36 percent lower in the one laptop per student schools compared to control establishments.
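To make these figures concrete, here is a small worked example. The 17 percent figure is explicitly a fraction of a standard deviation; assuming the 36 percent figure for the laptop schools is expressed the same way, and using invented test-score numbers purely for illustration:

```latex
% Effect size in standard-deviation units:
%   d = (treatment mean - control mean) / standard deviation of scores
\[
  d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s}
\]
% With an invented control mean of 500 and standard deviation of 100:
\[
  d = -0.17 \;\Rightarrow\; \bar{x}_{\text{treatment}} = 500 - 0.17 \times 100 = 483,
  \qquad
  d = -0.36 \;\Rightarrow\; \bar{x}_{\text{treatment}} = 500 - 0.36 \times 100 = 464.
\]
```

In other words, these are differences measured against the spread of all students' scores, not percentage drops in raw marks.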


In the race between the three technologies (i.e., keeping the pedagogical approach constant) the interactive whiteboard is the one that fares slightly better. We find that the best students were harmed the most by this intervention. Concurrently, their behavior deteriorated and they were less engaged with learning mathematics.”


The authors then attribute this outcome to the teaching in the intervention schools not being adequate, and suggest that if it had been adequate the outcome would have been different. This contradicts their own data and, as Harry Webb points out in the blog post below, it is likely because they were looking for evidence in favour of differentiation, which would account for a conclusion so at variance with their data. In their own words:


“the clustered randomized design ensures neither schools, nor teachers, nor students could have selected into the treatment. Furthermore, the fact that all teachers and all students participated in the experiment rule out other sources of possible biases. Indeed we showed that the experiment had perfect compliance, was internally valid and implemented on a large representative sample of schools. That is, this is not a result of a small experiment on a bizarre sample”.


They themselves admit they used a “psychometrically valid test which was designed to measure not only the basic concepts but also higher order skills (that we expected the intervention would foster). We found that the treatment groups [differentiated] performed worse than the control [no differentiation] in both learning dimensions.”



[3] How can we explain why it should work or not using the “cognitive science model of learning”?


The cognitive science model of learning does not support or even mention differentiation, nor can differentiation be predicted or deduced from it. Human beings all learn in the same way regardless of their preferences; their internal mental processes are the same (see http://sciencesite.16mb.com/page57.html).


It does support starting from wherever a student's long-term memory of a subject ends: for many GCSE topics students have no prior knowledge (nothing in long-term memory), whilst for others, for example renewable energy, they will have some from KS3.


According to the model, differentiation won't work because omitting information or knowledge in order to increase the difficulty for a student simply increases cognitive load, which makes it harder to learn.


Nor is there any evidence that 'active' learning is better than 'passive' learning; this claim came from the learning pyramid, which has been thoroughly debunked – see [No 7] at http://sciencesite.16mb.com/page57.html


Conversely, providing lots of prompts simply reduces the need for students to memorise the knowledge and gives the appearance of learning in the lesson without building their long-term memory.


[4] What alternatives are there?


Allow students to set their own pace and level of challenge, and give them the time to memorise the content (into long-term memory) so that they can think about it several weeks later in an activity or a test. According to the CS model, learning a little well makes it much easier to learn new material; differentiated activities, by contrast, give the appearance of learning but do not build long-term memory.


The science GCSE exam does not differentiate between students other than by tier of entry (which may well disappear). Foundation papers, for example, intertwine easier topics with harder ones in the same question – for instance generating electricity through induction (electric motor design) alongside renewable sources – which causes many students who had focused on the easier latter topic to come unstuck.


Ideally, remove rota systems and only move students on once they have mastered the topic at hand, free of misconceptions. At present rotas are used, understandably, to cover the content in the time available, but the time students need to thoroughly learn and memorise the content is longer than the three or four weeks given to a typical topic. This is especially true when they are novice learners or have not done well in the subject before: their long-term memory of the subject is much weaker, and the CS model predicts that this will make new content both slower and more difficult for them to learn.


Provide students with teacher-written notes on the topic to reduce cognitive load and increase encoding (of concepts, facts, relationships and sequences, a.k.a. skills). Textbooks simply do not do this and are therefore hard for these students to understand. (For the rest, please refer to the TLA done last year.)


[5] Information source for the links and summary


All credit to Mr. Webb for publicising the studies and arguments questioning differentiation.

https://websofsubstance.wordpress.com/author/webby101/


The evidence on differentiation

Posted on July 1, 2014 by Harry Webb

There are two reasonably sound ways of mounting an argument. Firstly, you can use empirical evidence; data from studies and trials.

Alternatively, you can use reason and mount a logical case for the position that you are adopting. I like to think that I do a bit of both on this blog. My recent post about differentiation was mainly a reasoned argument but it is worth noting that it was in response to an article by UK education minister, Liz Truss, which in turn was prompted by the release of the results of the TALIS survey. This is a survey carried out regularly by the OECD of teacher practice in different countries. There is a useful summary on the UK education department’s website. The key passage is probably this:


“Teachers in England are also much more likely than teachers in most countries to give different work to students with different abilities (‘differentiation’). 63% report doing so often compared to 32%, on average, in high performing countries.”


[Also, my addition: The proportion of teachers who use group work ‘frequently’ or ‘in all or nearly all lessons’ is 58% in England, 61% in low-performing countries and 25% in high-performing countries. Also, high-performing countries use considerably less pupil self-assessment than England, and less project work and ICT.]

(DfE, p. 148)


This is what prompted speculation about the efficacy of differentiation.

Let me summarise my main point. I am not convinced that planning different activities for students to complete in the same class is worthwhile. Such planning is burdensome on teachers. Enacted in class, it spreads teachers thinly as they monitor progress of different groups on a range of tasks and there is little evidence that it is effective. It also raises some worrying equity issues. This does not mean that I am against all differentiation in a broader sense. For instance, I would target verbal questions to different students in my class.

I received a little flak on Twitter for my post and so I’d like to respond to some of these comments.

Firstly, @HeyMissSmith took issue with my perceived lack of interest in children; “We all: secondary & primary teach CHILDREN not content; this should be our focus, nothing else, this is the key to becoming effective.” However, I’m not sure that we are actually that far away from each other as Miss Smith also points out that she does not spend much time planning for differentiation.

But let’s examine this point. If we accept that our role as teachers is to teach children – and who wouldn’t? – then the question has to be asked as to what we are going to teach them. It is a powerful rhetorical device to state that we ‘teach children not content’ but, on examination, these two alternatives are clearly not exclusive. Unless you are quite radical in your approach and eschew the transmission of any kind of knowledge then you are in the business, to some extent at least, of teaching children content. I am interested in the most effective ways of doing this. This does not make me a child hater.


Another commentator, @JohnClarke1960, took issue with me at some length. At first, he seemed to think that I wasn’t a teacher and had a bit of a go at my credibility. In fact, he seems to be mainly interested in who makes a particular claim and whether or not that person is a teacher. I am not interested in Ad Hominem and its reverse; argument from authority. I am interested in whether an argument is correct or incorrect and so I tend to ignore this sort of thing. However, John also made three points worth responding to.


Firstly, he accused me of assuming that students are all the same, “Drivel disguised as academic thought; u assume all kids r homogenous & all learn exactly same stuff the same way.” I don’t assume this. Clearly, students are different although I am not convinced that they learn things in different ways. However, simply recognising difference does not mean that we have to adopt a certain way of teaching, especially if it is not a very effective one. It may well be the case that, whilst different, all children will benefit most from whole-class teaching. In this case, it would be in their interests to do whole-class teaching, whatever our beliefs about valuing the individual.


Secondly, John suggested that differentiation is good practice because a lot of teachers do it; “It’s called professional good practice because 1000s of professionals do it after reflecting on their practice.” This is an interesting argument. However, it is quite possible for lots of professionals to be wrong or, at least, a little misguided. For instance, bloodletting with leeches used to be a consensus position in the medical profession. It isn’t any more. The fact that lots of people subscribed to it didn’t make the practice any more effective. It’s also a kind of circular reasoning; differentiation is good practice because it’s what we as a profession define to be good practice.

[Please see the debunking of learning styles, No 4, on my website: http://sciencesite.16mb.com/page14.html]

It is easy to see why such an idea holds great currency among practising teachers. When I was training as a teacher, we spent very little time on classroom management or subject knowledge and an awful lot of time on differentiation. Many teachers simply assume that it must be the right thing to do because this is what they were told during training. I still remember the guilt that I used to feel about not differentiating ‘properly’. I have always inclined towards whole-class teaching but in the early part of my career it never occurred to me that this might actually be more effective than the complex differentiation that I was failing to do. I just assumed that my lecturers must have been right and I felt inadequate.

John’s final point has some substance. He criticised my lack of empirical evidence against differentiation. I did mention TALIS a few times on Twitter but I can see why he might have missed this given that it wasn’t mentioned directly in my original post. When looking at the evidence, we need to bear in mind a few things. Differentiation is burdensome if, by differentiation, we mean the preparation and use of multiple simultaneous tasks within one lesson. Therefore, this is not a zero-sum game. To justify its use, we need to see clear benefits of differentiation over the less burdensome whole-class teaching. If both approaches are equally effective then whole-class teaching may well be better because it frees teacher time to focus on other issues such as assessment or planning more generally.

In support of differentiation, John offered this study. It has some impressive elements for an education study. The number of participants is quite large: 490 students in 24 Year 4 classes. However, the control is weak. The teachers who used differentiation were not randomly selected; they were volunteers. The overall effect is also quite small for such a study; the effect size for a comprehension test is just 0.31. It is also slightly odd that the researchers openly state that the objective of the study was to find evidence to support the theory of differentiation. When you take into account the lack of a proper control and the likelihood of a Hawthorne effect then I think we are seeing pretty much no evidence at all.

For my part, I can offer two papers that are of interest. Neither of them deal directly with differentiated instruction but rather with aspects that closely relate to differentiation.

The first paper is a large scale correlational study of TIMSS data from the US. This shows that a greater proportion of lecture-style teaching was associated with higher performance on the TIMSS assessment.

The second paper is a randomised controlled trial from Costa Rica. It is particularly interesting because the results obtained are the opposite of what the researchers were hoping to find. A traditional teaching model was pitted against various levels of an innovative teaching strategy that allowed students the opportunity to discover and explore. The intervention groups differed in the amount of technology – laptops etc – that was available. Of course, this isn’t quite the model of differentiation that I have outlined above but it is an attempt to personalise the learning. To their obvious dismay, the researchers found that, “the control group learned significantly more than any of our treatment arms.”


I was also linked to a few things via Twitter. This paper offers a null result and was brought to my attention by @Rokewood. No relationship was found between, “time spent lecturing in front of the class and student performance.”


And as I have pointed out before, there are different kinds of lecturing. I, for instance, constantly ask my students questions as I talk to them from the front of the class. This is clearly different to a non-interactive presentation.

What none of this amounts to is compelling evidence that we should be spending vast amounts of teachers’ time in planning differentiated activities. To justify such an approach we would need strong evidence of the kind that is simply not available. Unfortunately, it seems that such a model is based more upon a kind of philosophy about putting the children first – the slogan ‘we teach children not content’ – than anything grounded in substantial research.

And yet thousands, as I write, are training to be teachers and learning to feel guilty about not differentiating properly.