Here's an argument, of sorts: Donald Trump was elected because there's too much political correctness. Hysterically policing other people's speech just is provocative and wrongheaded, and it makes people do bad things; common sense and a healthy respect for liberal toleration tell us as much.
Can science explain how we got here?
Writing in the Guardian earlier this year, social psychologist Jonathan Haidt laments the effects of 'concept creep', whereby the meanings of clearly defined terms are "'defined down' so that they are applied promiscuously to milder and less objectionable events." For example, 'bullying' and 'trauma', once rigidly defined, now apply to a much broader and, Haidt says, milder array of phenomena. And that's a problem, because students have consequently become intolerant of a broader range of behaviours and speech acts, which are now seen to constitute 'bullying' and 'traumatising'.
What, precisely, is wrong with 'concept creep'? That depends on to what extent, if any, you think the expansion of these terms is harmful. Calling the phenomenon 'concept creep' adds no normative weight whatsoever to the question of whether expanding our conceptions of harm is helpful or not. Nor is it obviously descriptively useful. After all, activists focusing on what they call the trauma relating to, for example, 'black pain' will be the first to proclaim that they are expanding common terms of reference to reveal unacknowledged social realities.
Redescribing a phenomenon with a neologism neither makes it true nor lends it any particular significance. What Haidt is doing is invoking the prestige of science to confer authority on an opinion.
To whom is that authority available? The question is worth asking at a time when the world feels unpredictable and our trust in authority is waning. I would suggest that what appears to be a crisis of science is in fact a crisis of authority.
Economists who failed to anticipate the 2008 financial crisis, pollsters who couldn't see Brexit coming, academic psychologists whose findings cannot reliably be replicated, pharmaceutical companies that selectively report evidence. It may be tempting to question whether science is up to the task of explaining our complex world after all.
But careful scrutiny of scientific enquiry, often by practitioners of the disciplines scrutinised, reveals the opposite. If economists neglect the rigorous empirical work needed to ground their assumptions and are slow to reject theories that have been falsified, and if psychologists fail to apply sufficiently exacting statistical protocols to their research, the lesson is not that science has failed, but that we have failed to do proper science.
Of course, we can err in the other direction; psychology and economics are not physics, and the best practices of one domain may not be useful in another. And sometimes what looks like rigour is unproductive reductionism. But if there is at least some evidence that our most prestigious and influential technical disciplines do not always live up to the highest scientific standards, it's worth asking how they have acquired the authority to determine public policy without ongoing public scrutiny.
How many of us laughed at the young student who demanded that 'science must fall' while nodding along with the radio psychologist explaining away the mysteries of the soul by way of the 'statistically significant' findings of a study whose sample was a handful of Ivy League teenagers?
The problem with technocracy isn't the drive to quantify. It's the risk that quantification becomes its own justification and, in so doing, overreaches in a way that undermines its own efficacy. That overreach also makes technocracy useful to those who wield it: if the numbers speak for themselves, there is no alternative but to do what the data tells us to do. It's no accident that the technocratic imagination cannot envision a better, broader world than the one created in the image of elites.