Humans and Machines, Empathy and Indifference

September 8, 2020

By Ian Allen

Seventy years ago, Albert Camus complained about the increasing cleanliness of the times: “We make love by telephone, we work not on matter but on machines, and we kill and are killed by proxy.” 

Today, the trend continues. Whether weapons[1] or computers (or even love, it seems), progress has often been that which increases distance, decreases responsibility, and destroys intimacy and understanding. We kill from afar and work from afar; we watch from afar the struggling hospitals and neighborhoods, filtered and curated through screens that lack touch, smell, taste, feel. We’re down to a single sense, viewing and reading what the machines present to us, increasingly virtuous and clean and self-righteous.

This process (if not the result) was by design. At the halfway point of the twentieth century, we could be forgiven for wanting to engineer out man’s passions and prejudices; to free processes and decisions from flawed human reasoning; to free man from certain monotonous indignities so that we might fulfill our individual promise; to let the data speak an objective truth for itself. To this goal, Camus was not unsympathetic (suspicious though he was of perfection). But his point was: let us not think we’ll solve, or automate away, the problem (that which causes injustice, cruelty, unrealized promise) through either ideology or technology alone. What really matters is empathy.

Empathy, as Merriam-Webster has it, is “the action of understanding…being sensitive to, and vicariously experiencing (emphasis added), the feelings, thoughts, and experience of another…” What’s interesting is how human that definition is, how it could in part define consciousness itself. Theory of Mind, after all, was man’s first Theory: an unintentional and unknowing hypothesis about the original other, vicariously experiencing.

Machines, to state the obvious, are terrible at this. Humans, to perhaps understate, are sometimes terrible at this. But machines are bad at empathy because indifference is their inherent and singular condition. Humans are the opposite, and it’s instructive to consider words that Merriam-Webster lists as near antonyms of indifference: prejudice, attentiveness, curiosity, bias, vehemence, zeal, warmheartedness, desire. Our faults and strengths span dimensions. It raises the question: what are we, and what are computers, capable of? What are we, and what are computers, good at? For our purposes here: how do we leverage the power of big data analytics to empower institutions and help humans improve society?

It’s often been noted that the relationship between humans and computers has inverted over the past couple of decades. For most of the history of computing, humans did the bulk of the work and consulted the computers for verification or for particularly challenging components of a problem. Now it’s the reverse: computers are constantly learning, constantly processing nearly unfathomable volumes of information, and only checking in with the human for the particularly challenging components of a problem. And what are these challenging components? Questions of perception, morality, feelings; examples of abject racism, cruelty, and violence. This is why Facebook and YouTube – companies with all possible resources and incentives to automate this process – still employ many thousands of human content moderators: because computers still have no idea when even the most deplorable imagery is a violation of Terms of Service. Context and empathy exist, still, beyond the reach of technology.

Consider how the process works: human moderators view a piece of content. What happens next depends on an immediate reaction; this thing is either offensive or it is not. Granted, there are plenty of gray areas. Opinions differ and much can be ignorantly offensive or just inappropriate. Different moderators will often come to different conclusions. But the exceptions prove the rule: even the most obviously objectionable content will leave the machines stumped. Yet for the human, there’s a visceral reaction that says, That’s just wrong.  
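
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of the human-in-the-loop pattern described above: the model acts only on unambiguous cases, and anything in the gray zone is escalated to a person for the judgment the machine cannot make. The function names, thresholds, and structure are illustrative assumptions, not any platform’s actual moderation system.

```python
# Hypothetical human-in-the-loop moderation sketch (illustrative only;
# not any platform's actual pipeline).

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationResult:
    decision: str       # "allow", "remove", or whatever the human decides
    decided_by: str     # "model" or "human"
    model_score: float  # model's estimated probability of a violation

def moderate(content: str,
             model_score: Callable[[str], float],
             human_review: Callable[[str], str],
             clear_violation: float = 0.98,
             clear_ok: float = 0.02) -> ModerationResult:
    """Let the model act only on unambiguous cases; send everything
    in the gray zone to a human reviewer."""
    score = model_score(content)
    if score >= clear_violation:
        return ModerationResult("remove", "model", score)
    if score <= clear_ok:
        return ModerationResult("allow", "model", score)
    # The hard cases (context, intent, cruelty) go to a person.
    return ModerationResult(human_review(content), "human", score)
```

The design choice is the essay’s point in miniature: the thresholds decide how much the machine is trusted, but the gray zone, however narrow, always terminates in a human.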

It’s fair to acknowledge that That’s just wrong has its problems. One can see Camus’ technologically hopeful contemporaries indignant at the suggestion that man has any sense of what’s just wrong. You can see them standing among the rubble of Berlin or Paris, casting a hand around them, saying (mocking Wren’s epitaph), If you seek man’s monument, look around.  They’d ask: we are to trust man’s sense of right and wrong? They’d say: the machines will work without hatred, they will just do math and run regressions and spit out answers that don’t care about your faith or philosophy. Humans are the problem, the chokepoint, the eye of the needle through which the machines – say, just one cluster (of countless) on Amazon Web Services – would pass the knowledge of three million books per day if only neurobiology could keep up.  

Well, yes and no, and that’s not the point anyway. First, applied technology is not an exercise for its own sake. It’s a tool for us to ameliorate suffering and improve the human condition. If the data isn’t applicable and useful, then it’s pointless. Second, technology’s indifference can be as cruel as hatred if not supervised. As with our content moderator, we rely on a human to evaluate the product. Often, the human reaction will be either, nope, wrong, go back and look at the inputs (e.g., are the data geographically and demographically inclusive and representative?), or, what uncomfortable thing did I just learn about injustice or cruelty or inequality?
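
The “go back and look at the inputs” step can itself be made routine. Below is a small, hedged sketch of what such a representativeness check might look like in Python with pandas; the column names, groups, and reference shares are placeholders for illustration, not real data or any particular project’s method.

```python
# Hypothetical representativeness check: compare a dataset's
# demographic or geographic mix against reference population shares
# before trusting what the analysis says.

import pandas as pd

def representation_gaps(df: pd.DataFrame,
                        column: str,
                        reference_shares: dict,
                        tolerance: float = 0.05) -> dict:
    """Return groups whose share in the data differs from the
    reference population share by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": actual}
    return gaps

# Example with placeholder figures: flag regions that are under- or
# over-represented relative to, say, census shares.
# gaps = representation_gaps(survey_df, "region",
#                            {"urban": 0.8, "rural": 0.2})
```

A check like this can say that a group is missing from the data; only a person can say what that absence means, and what to do about it.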

Further, an analytic result is not always an answer. It’s an insight. It’s high-quality, potentially revelatory, clinically cold data that must be met with empathy before it becomes a decision. But it’s not an answer in and of itself. It does not free us from responsibility. It’s not the end of the road, but rather a tool to make the way increasingly clear. Outsourcing our decisions to the data is seductive; it seems to relieve us of responsibility and allows us to point to something and say: there, that is why. Imagine the case where algorithms are used to guide decisions about parole: It’s not me denying freedom, it’s the machine. I’m just following the data.

If that’s your reasoning, you’re using the data wrong. If there’s one sure rule in leadership and decision making, it’s this: we must feel some of the pain we inflict in our decisions lest they (decisions about killing, for example by drone strike) become too easy. We must never forget that we alone are responsible. We thus respect and admire those who have to make hard decisions because we know the weight they should justly feel when there are no good options and every decision will have a cost. It’s only inaction that we detest, the abdication of responsibility that’s far crueler than the indifference of any machine, because we, at least, should know better.

However, for those leaders who seek responsibility, the remarkable power of ethical data can help show us the way forward. For eight months now, we’ve seen how this can be done. Working with researchers and scientists from Harvard, Yale, and many others, we’ve seen how to build the analytics and follow the data, and – when we find something in the data that demands context – how to stop and pause and evaluate what we just learned. The same is true for many things, from COVID-19 to climate change to equality. We have the tools. We now only need to recall what we’ve always known: empathy matters.

___________________________________________________
[1] I wrote the ‘weapons’ version of this essay here: https://www.thecipherbrief.com/column/book-review/getting-to-the-truth-about-cias-paramilitary-operators (paywall)

