Recently, I was let in on a fascinating email conversation that began spontaneously when two academics fell into a debate over how their different backgrounds lead them to look at the world. The protagonists were regular Significance contributor Michael Lewis and Prof Michael Marder of the University of Texas at Austin. The discussion began when Lewis contacted Marder with a question about his paper, ‘Failure of U.S. Public Secondary Schools in Mathematics’.

In the paper, Prof Marder asks whether poverty affects a child’s school performance more than their teachers’ performance does. What followed was an engaging discussion between a social scientist (Lewis) and a physicist (Marder) about their contrasting approaches to modelling a very real issue.

Lewis began with:

As I was reading your paper I had a few thoughts. One is that your argument seems to be based largely on correlations between the proportion of students receiving benefits for low-income families and the proportion of students who achieve certain academic outcomes in a school. But one thing that is drilled into us in the social sciences is that correlation is not causality. I guess what I'm saying is that from the point of view of how we use statistical methods in the social sciences, your article seems to downplay some of the complicated statistical issues that are involved.

In his paper, Marder draws parallels between failures in the US education system and the tribulations of the de Havilland Comet, an early jet airliner. He replied using this jetliner analogy:

Concerning the methods of social sciences, it is tempting to imagine the following. We rewind the clock to 1954. Instead of employing engineers to investigate the Comet, the best methods of the social sciences are employed instead. Pilots are assigned randomly to planes and as the crashes mount up, the results are entered into a hierarchical linear model. Of all the factors within the control of the airline, it turns out that after controlling for weather, pilot preparation is the most important determinant of success in the air. They launch a campaign to enable new college graduates to become highly qualified pilots after just six weeks of training. Many new pilots are needed, because with a plane crash on average each day the supply is running short.

Lewis countered by bringing economic methods into the debate:

I think what may be going on is that, rightly or wrongly, I'm thinking about your article the way we tend to approach things in economics. First, I'm convinced that teacher quality is not a factor in explaining the outcomes you focused on, so I agree with you about this. I also think that poverty is the main factor. But from an econometric point of view, your findings could be consistent with something we call omitted variable bias. It could be that spending per pupil and poverty concentrations are positively correlated. In economics, if we wanted to estimate the relationship between poverty concentration and what I'll just call 'achievement' (as a shorthand), we'd run a regression model to estimate the coefficient depicting this relationship.
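To make the omitted variable bias Lewis describes concrete, here is a minimal simulation sketch in Python. The effect sizes and the strength of the poverty-spending correlation are invented for illustration and are not taken from Marder's paper; the point is only the mechanism: if per-pupil spending is positively correlated with poverty and also affects achievement, a regression that leaves spending out mis-estimates the poverty coefficient.

```python
# Minimal sketch of omitted variable bias (all numbers invented).
# True model: achievement depends on both poverty and spending, but the
# "short" regression omits spending. Because spending is correlated with
# poverty, part of its effect is absorbed into the poverty slope.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

poverty = rng.normal(0.0, 1.0, n)                   # standardised poverty concentration
spending = 0.5 * poverty + rng.normal(0.0, 1.0, n)  # positively correlated with poverty
achievement = -1.0 * poverty + 0.8 * spending + rng.normal(0.0, 1.0, n)

# Short regression: achievement on poverty alone (spending omitted).
short_slope = np.polyfit(poverty, achievement, 1)[0]

# Long regression: achievement on both predictors, by least squares.
X = np.column_stack([np.ones(n), poverty, spending])
long_coefs, *_ = np.linalg.lstsq(X, achievement, rcond=None)

print("true poverty effect:    -1.00")
print(f"short regression slope: {short_slope:+.2f}")    # roughly -0.60, biased
print(f"long regression slope:  {long_coefs[1]:+.2f}")  # roughly -1.00
```

With these made-up numbers the short regression recovers a slope of about -0.6 rather than the true -1.0: the omitted spending effect (0.8) times its association with poverty (0.5) is silently folded into the poverty coefficient. That is the pattern Lewis is warning about.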

Marder then brought the example of a simple electrical circuit to illustrate his point:

I would say that omitted variables are a necessary part of causality. For example, suppose I try to show you that flipping a light switch causes a light to go on. You can respond correctly that I haven't taken into account the fact that the two are hooked together in a powered electrical circuit. Change those things, and the causation changes. The flow of current through the wire is an omitted variable. Omitting mention of conductivity is not a systematic error. If a chain of correlation is strong enough in a well-defined context, physical science calls it causality, mainly because the understanding can be used as grounds for action. One way to explain the purpose of engineering is to say that it creates new contexts in which new causal relations become possible. Causation is not immutable. I can rewire the circuit if I understand how it works.

Lewis then acknowledged the differing approaches the two were taking to the issue:

I don't think we are anywhere near as good at detecting (if that's the right word) causality as physical scientists are. I guess there are many reasons for this including the inability to run controlled experiments and the inherent complexity of human behavior and institutions. This is the reason for the "correlation is not necessarily causality" concern and for the dominant way statistics is used in the social sciences.

Marder replied with some background to the conclusion he reaches in his paper:

My current understanding is that poverty itself is not the problem, but instead it is a wide variety of phenomena associated with poverty, including adverse childhood experiences, unstable family and housing situations, and other things. Thus the way to address the interaction of poverty and education is not just to dole out cash (which is politically untenable anyway) but to search for the best way to act on these accompanying phenomena. So I agree completely with the idea that hidden variables exist and are fundamental to causality in social settings. What I am not convinced of is that the act of constructing regression models with lots of variables is an adequate response to this basic truth.

Lastly, Lewis conceded:

I agree with the final point you make that constructing regression models with lots of variables may not be the best way to detect causality in the social sciences. I think methods from statistical and fluid mechanics should be brought in too. What we study in the social sciences is so messy and politicized that I think we need all the help we can get.

What the discussion showed was two perspectives on finding out what will happen when you change something in a system, be it an aircraft engine or the marginal tax rate. Understanding what causes change, and being able to predict the effects of interventions, is what makes it possible to control systems. Both engineering (drawing from the physical sciences) and policy analysis (drawing from the social sciences) are attempts to realize this vision.

It could be argued that prediction in the physical sciences is far better than prediction in the social sciences. However, the discussion also suggests that this difference may be overstated. Are social scientists really worse at predicting human behaviour than physical scientists are at predicting earthquakes? Can an economist predict a spike in the price of turkeys any better than geophysicists can predict when the next ‘big one’ will hit Tokyo or San Francisco? The physical and social sciences may not be so different in this respect.

The above is an edited extract from a wide-ranging and longer email conversation that took place between Michael Lewis and Michael Marder.
