Algorithms are not merely technical entities; they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility.
by Michael Sacasas
The title of an article by Virginia Heffernan in Wired asked, “Who Will Take Responsibility for Facebook?” The answer, of course, is that no one will. Our technological systems, by the nature of their design and the ideology that sustains them, are machines for the evasion of moral responsibility.
Heffernan focused on Facebook’s role in spreading misinformation during the last election, which has recently come to fuller and more damning light. Not long afterwards, in a post titled “Google and Facebook Failed Us,” Alexis Madrigal explored how misinformation about the Las Vegas shooting spread on both Google and Facebook. Castigating both companies for their failure to take responsibility for the results their algorithms generated, Madrigal concluded:
There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.
Writing on the same topic, William Turton noted of Google that “the company’s statement cast responsibility on an algorithm as if it were an autonomous force.” “It’s not about the algorithm,” he added. “It’s not about what the algorithm was supposed to do, except that it went off and did a bad thing instead. Google’s business lives and dies by these things we call algorithms; getting this stuff right is its one job.”
Siva Vaidhyanathan, a scholar at UVA whose book on Facebook, Anti-Social Media, is to be released next year, described his impression of Zuckerberg to Heffernan in this way: “He lacks an appreciation for nuance, complexity, contingency, or even difficulty. He lacks a historical sense of the horrible things that humans are capable of doing to each other and the planet.”
This leads Heffernan to conclude the following: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability […] It’s hard to imagine he will submit to truth and reconciliation, or use Facebook’s humiliation as a chance to reconsider its place in the world. Instead, he will likely keep lawyering up and gun it on denial and optics, as he has during past litigation and conflict.”
This is an arresting observation: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability.” Frankly, I suspect Zuckerberg is not the only one among our technologists who fits this description.
It immediately reminded me of Hannah Arendt’s efforts to understand the unique evils of mid-twentieth century totalitarianism, specifically the evil of the Holocaust. Thoughtlessness, or, better, an inability to think, was, Arendt believed, near the root of this new kind of evil. Arendt insisted that “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”
I should immediately make clear that I do not mean to equate Facebook’s and Google’s very serious failures with the Holocaust. This is not at all my point. Rather, my point is that, following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely to calculate or solve problems), and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failure.
It is one thing to expose and judge individuals whose actions are consciously intended to cause harm and work against the public good. It is another thing altogether to encounter individuals who, while clearly committing such acts, are, in fact, themselves oblivious to the true nature of their actions. They are so enclosed within an ideological shell that they seem unable to see what they are doing, much less assume responsibility for it.
It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility. As such, however, they are just a new iteration of the moral myopia that many of our best tech critics have been warning us about for a very long time.
This piece was first published on the author’s personal blog on 4th October 2017.
Michael Sacasas serves as the Director of the Center for the Study of Ethics. He earned his MA in Theological Studies from Reformed Theological Seminary in 2002. He is currently completing a Ph.D. in Texts and Technology from the University of Central Florida. His dissertation examines the work of Hannah Arendt and the resources it offers to those seeking to understand the personal, social, and political implications of emerging technologies. He has written about technology and society for a variety of outlets including The New Inquiry, Rhizomes, The American, Mere Orthodoxy, and Second Nature Journal.