The idea of unconscious bias exploded into popular consciousness after a psychological test purporting to reveal it was posted on a public-facing Harvard website. Significant questions remain about whether and how unconscious biases affect real-world actions, but that has not stopped the test from inspiring thousands to serious internal reflection, or its principles from being incorporated into mandatory trainings across the corporate world. The podcast Hidden Brain did a two-part series on the test and some of the studies surrounding it, which was reasonably interesting, but what stuck with me most was an observation made near the end.
One of the researchers who developed the unconscious bias test ventured from psychology into philosophy. In particular, she suggested that modern bias is reflected not in negative or adverse actions directed at those against whom the bias is slanted, but in what we would traditionally think of as good deeds. In other words, we perpetuate bias not so much by keeping someone down as by helping someone up. She goes so far as to say that she no longer considers helping an individual a good thing unless it is done in a way that can be replicated across society. It’s not enough to open the library early for your friend’s kid; you must change the whole library schedule for everyone. Otherwise, you are perpetuating bias.
More than anything else in the two-part episode, that final observation has tumbled around in my brain since I heard it. The idea is, on the one hand, intuitive. It is quite reasonable to suppose that apparent systemic differences in our society result from individual-level good deeds which are, in general, more likely to be done by the majority for the majority, and which thereby perpetuate whatever biases are already present. From my own, decidedly unscientific, observations, this seems far more likely to explain the statistical disparities that attract so much condemnation and consternation than active, systemic factors keeping certain groups of people down. It also aligns with my idea of narrative physics, in which the behavior of individuals is like quantum physics, giving rise at the macroscale to the classical physics of group behavior. Yet I cannot bring myself to agree with the conclusion that this makes helping individuals, absent a uniform process, wrong.
I believe that morality is an individual matter, operating at that quantum level of narrative physics. Principles like “virtue is the mean between vices” and “conduct is right insofar as you could will it to become universal law” are individual principles of morality. There are moralities that apply to groups, like utilitarianism, but the conclusions of such moralities can lead to deeply uncomfortable places. To continue the physics analogy, the classical physics of group morality should, in my view, arise naturally from the quantum physics of individual morality, which utilitarianism does not.
Even if we establish this as a truth to which we will hold, it does not resolve the issue of what we might call positive bias or positive discrimination (although I dislike the latter, since discrimination is not intrinsically negative – it is a necessary act of choosing, and becomes morally questionable only when it is based on the wrong factors, however we choose to define those). I volunteer as a STEM mentor for a middle school. If one of the students with whom I work asks for extra tutoring, the idea of positive bias would suggest that I am precipitating or perpetuating a harm by agreeing, since other students would not have that opportunity.
The reasoning given in the podcast is based on the idea that the person I would be helping is more likely to be part of my “in” group, and that I would therefore be discriminating against whatever my “out” group is unless I develop some way to apply my tutoring universally. However, this is rooted in the grouping mentality of statistics. It is easy to perceive the appearance of a problem when the common statistical groupings are invoked: ethnicity, race, gender, et cetera. Yet, in my example, none of these can be considered to apply except coincidentally. I was matched with this middle school, which is in a completely different part of the country from me, because we both signed up to participate in a program through my work – a program intended as a one-off event that became a longer-term mentoring relationship. This rather shows the illusory nature of the invoked problem.
Even if we consider an example with a less random selection process, we still find that the problem arises only if we assume that the decision of whom to tutor is based on statistical identity groups. For offering aid to an individual to count as a negative under this paradigm, we must first assume that implicit or unconscious bias affected the decision of whom to aid, and we must also assume that the statistical identity groups are valid and relevant in this context. These are significant assumptions on which to build an inversion of classical individual morality.
A key factor in moral decision-making is that most moral decisions are made well in advance of when they are implemented. When we help someone, we act on a decision – made long before the moment we actually encounter that someone – that helping people in certain circumstances is moral, right, and desirable. Thus it is highly unlikely that any implicit or unconscious bias is affecting the decision in the moment, or even any alignment of “in” group and “out” group, however those are defined; whom we happen to help comes down to a simple matter of external probability, with nothing of morality to it.
Please, therefore, do not stop doing what is “good” in the name of reducing bias. Like a bounty paid on rat tails, trying to manipulate statistical-level outcomes through individual behaviors is likely to produce unintended harms. Better to be a moral individual, in whatever sphere, than to refrain from doing good for fear that you may not do enough good.
