While reading Richard Nisbett's book Mindware: Tools for Smart Thinking (link to review), I came across a couple of nice paragraphs worth citing. Nisbett reasons about incentives and discusses the effects of so-called loss aversion (humans abhor losing what they already hold). I am interested in the paper because it deals with an incentive for teachers that is completely material, i.e., dear old money itself. Interestingly, though, money by itself would not suffice. Read on:
In this paper, we demonstrate that exploiting the power of loss aversion–teachers are paid in advance and asked to give back the money if their students do not improve sufficiently–increases math test scores between 0.201 (0.076) and 0.398 (0.129) standard deviations. This is equivalent to increasing teacher quality by more than one standard deviation. A second treatment arm, identical to the loss aversion treatment but implemented in the standard fashion, yields smaller and statistically insignificant results. This suggests it is loss aversion, rather than other features of the design or population sampled, that leads to the stark differences between our findings and past research.
The paper itself is:
Roland G. Fryer, Jr., Steven D. Levitt, John List & Sally Sadoff, 2012. “Enhancing the Efficacy of Teacher Incentives through Loss Aversion: A Field Experiment,” NBER Working Paper 18237, National Bureau of Economic Research, Inc.
Link to paper (pdf). Link to summary.
Now, when I read such “studies” I cannot help but be reminded of Neil Postman’s words about social science: “I do not believe psychologists, sociologists, anthropologists, or media ecologists do science”. Well, I am joking here, but the Postman quote is real and full of sense (it comes from a paper titled “Social Science as Theology”, Etc., Vol. 41(1), 1984, pp. 22-32).
What do we make of this? That teachers are but pawns in a field dominated by forces they do not even see or control? Aren’t we all. Still, this sounds really spooky, and I resist linking the concept of teacher quality to students’ test scores:
This is equivalent to increasing teacher quality by more than one standard deviation.
I believe this is misleading, and I really don’t like the word “science” being applied to studies of this kind: if nothing else, because there is no causal link at all, just a correlation. If I put two and two together and remember that in some institutions administrators request “personal retention rates” for each teacher’s classes and tie those to teachers’ “performance”, then I must conclude that we are deeply wrong. The analytics movement, with all its good faith and interesting proposals, may be but another route to more control, with no serious benefit to students’ learning.