Algorithmic Ethics

“Algorithmic ethics” seems to be everywhere, and many see it in a troubling light. The technical dimensions seem to lie at the heart of the issue, but deeper ethical quandaries actually run underneath.

Most of what gets called algorithmic ethics falls into a straightforward category that I call technical.

There’s the Bertelsmann Stiftung project called — aptly — “Algorithmic Ethics”, and it serves as a good introduction. It aims to raise public awareness about the relevance of algorithmic processes, structuring itself as a fact-based debate and action-oriented analysis of “the problem,” and testing “solutions” (presumably to whatever problems it identifies). Considering the widespread criticism the foundation has received — including, for example, regarding its undemocratic operations — and its evidently technocratic analytical approach to this problem, I wonder if BS might not be full of, well, BS, or in any case swimming too swiftly in its own algorithmically ordered operations to see the water around it.

Then there’s the non-profit Algorithm Watch. There’s the EU project algoaware, a private-sector response to a call issued by the EU precisely to “support algorithmic awareness building.” (Note for algorithmic ethics aficionados! It has a good collection of events and a limited but intelligent set of bibliographic resources.) There’s a new subfield of journalism dedicated to algorithmic accountability reporting. Algorithmic ethics is being incorporated into university courses and research on big data and the information era. Even the philosophers are talking about algorithmic ethics at this technical level. And there’s a special issue of Philosophy & Technology dedicated to the governance of algorithms.

Photo by Marco Secchi

The first problem: It’s technical

The problem — or the first problem — is that all these approaches to algorithmic ethics focus on the low-hanging fruit. And, frankly, there’s plenty here to keep everyone’s bellies full for a while: racial bias in policing algorithms, gender bias in twitterbots, income bias in automated loan decisions, and so on. This approach to algorithmic ethics is, well, technical. The “solution,” as it were, is to strip all those biased inputs out of the code and out of the system’s relational environment. Easier said than done, sure — but still straightforward enough.

A more penetrating dive into algorithmic ethics brings us to even more troubling territory, though, and it’s this that I want to focus on.

The second problem: Algorithmic humanity

Understanding algorithmic ethics only through its technical aspect misses something important. The more vexing consequence of the algorithmic explosion is that it modulates us. We — people — become ever more algorithmic in our own thinking, acting, and being: an ever more algorithmic world is transforming human identity and experience, the human person herself.

Philosophers have been worrying for many years about la technique and what it makes of us — Illich, Ellul, and Marcuse are just a few in a long list of critics to choose from. I see it everywhere: medical schools ingrain decision-tree logic to produce doctors incapable of “smelling out the problem” (much less of practicing compassion while they do it); decision-making just can’t proceed without a prior cost-benefit analysis and the input of constraints, domains, and hierarchies into the decision-maker’s brain; even love comes with a how-to listicle.

“But what of the person who has himself been swallowed by the world conceived as a system?”

Ivan Illich, The Age of Systems

At this point in history, the pervasiveness of the technological algorithm seems to make an algorithmically managed and directed existence, a sort of mechanistic human puppetry, appear as the only sensible option for efficient human operation — and at this point, in this system, what other option is there? Anything else would be literally irrational — that is, not ordered by a law of efficiency and not governed by a procedure of probability. As Lewis Gordon says, it’s not always reasonable to be rational: who would want to live with someone whose being and behavior were always and everywhere predictable? If that were the case, what about those famous virtues? Hope. Mercy. Forgiveness. Love. There’s nothing rational about any of them. And yet they are what make us people.

Does hope come with an algorithm?

Post by Leah M. Ashe