100 Years of Education Trials: No Significant Difference?

Last week, Jo attended an event celebrating the centenary of the first reported randomised controlled trial (RCT) in education – “100 years of education trials: no significant difference?”.

One of the panellists described the event as “the Glastonbury of experts in educational RCTs”.

The event, which was jointly hosted by the Social Statistics Section of the Royal Statistical Society and the National Foundation for Educational Research (NFER), featured leading speakers from organisations involved in commissioning, conducting and reporting RCTs in education, such as the Education Endowment Foundation (EEF), the NFER, the Nuffield Foundation, and a number of universities.

RCTs have been used widely, and with much success, in medical and pharmaceutical studies for many years, and are seen as the ‘gold standard’ methodology for assessing drugs and treatments. The field of education research has been slower to adopt RCTs, though this is changing: the majority of education RCTs have been published in the last 10 years. The suitability of the methodology for use in education is still contested. The debate at the event covered some of the history, the challenges and the successes of conducting trials in educational settings, with a view to determining how such RCTs can be improved and refined in the future.
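To make the core idea concrete, here is a minimal Python sketch of the logic of a trial: pupils are randomly assigned to an intervention or to business-as-usual, and the two groups’ outcomes are compared. All numbers (sample size, scores, the size of the true effect) are invented for illustration, not drawn from any real trial.

```python
import random
import statistics

random.seed(1)

def simulate_pupil(treated):
    # Invented model: baseline score of 50 (sd 10), plus a true gain of
    # 2 points for pupils who receive the intervention.
    return random.gauss(50 + (2 if treated else 0), 10)

# Random assignment is what makes this an RCT: each pupil has an equal
# chance of landing in either group, so the groups are comparable.
assignment = [random.random() < 0.5 for _ in range(200)]
scores = [simulate_pupil(t) for t in assignment]

treat = [s for s, t in zip(scores, assignment) if t]
control = [s for s, t in zip(scores, assignment) if not t]

diff = statistics.mean(treat) - statistics.mean(control)
pooled_var = (
    statistics.variance(treat) * (len(treat) - 1)
    + statistics.variance(control) * (len(control) - 1)
) / (len(treat) + len(control) - 2)

print(f"difference in means: {diff:.2f}")
print(f"effect size (Cohen's d): {diff / pooled_var ** 0.5:.2f}")
```

Because assignment is random, any systematic difference between the groups can be attributed to the intervention rather than to pre-existing differences between pupils.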

Many trials report effect sizes that are considered small, or find no evidence of an effect, leading some to wonder whether they are telling us anything useful. One reason for such results may be the EEF’s policy of publishing all findings regardless of outcome: this reduces publication bias (where published results are skewed in favour of positive findings), but it also lowers the average reported effect size.
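A small, self-contained simulation can show why this matters. Suppose the true effect of an intervention is small (d = 0.1) and each trial measures it with noise: averaging only the trials that happen to reach a ‘significant’ positive result inflates the apparent effect, while averaging everything, as a publish-all policy allows, recovers something close to the truth. All figures here are invented for illustration.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # invented: the true (small) effect size
SE = 0.15           # invented: standard error of each trial's estimate

# 1,000 hypothetical trials, each yielding a noisy estimate of the effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(1000)]

# A crude 'significant positive result' filter: estimate > 1.96 * SE.
significant = [e for e in estimates if e > 1.96 * SE]

print(f"mean effect, all trials published:         {statistics.mean(estimates):.3f}")
print(f"mean effect, only 'significant' published: {statistics.mean(significant):.3f}")
```

With these numbers the first mean should sit near the true 0.1, while the second is roughly three to four times larger: publication bias in miniature.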

A key challenge is implementation. If schools find it difficult to fully implement an initiative (changing setting and streaming practices is rather more involved than taking a new drug, for example), any effects are diluted. One speaker described how, irrespective of the statistical results of the outcome evaluation, the process evaluation (exploring how, and how well, schools implemented the initiative) is valuable in helping teachers and practitioners understand what was difficult to implement, and why.
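The dilution can be made concrete with a back-of-the-envelope sketch (all figures hypothetical): if the initiative genuinely works in the schools that implement it fully, but only a fraction of schools manage to, the effect measured across all schools shrinks roughly in proportion.

```python
# Hypothetical figures, chosen only to illustrate the arithmetic.
effect_if_implemented = 0.20  # effect size among schools implementing fully
implementation_rate = 0.40    # share of schools that implement fully

# In a simplified intention-to-treat view, non-implementing schools
# contribute (approximately) no effect, so the trial-wide average is
# diluted in proportion to take-up.
measured_effect = implementation_rate * effect_if_implemented
print(f"effect measured across all schools: {measured_effect:.2f}")  # 0.08
```

A genuine effect of 0.20 thus shows up as 0.08, small enough to read as a disappointing result, which is one reason the process evaluation matters.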

Reflecting on the event, Jo said: “While the concept of RCTs is elegantly simple, implementing them in educational settings and the social sciences is far more complex. It was great to hear about the latest developments and thoughts for future improvements to RCTs.”

To find out more about how we can help you with analysing data for education research, visit our Education Sector page.