AI Political Analysis

How the AI dilemma is a Human dilemma

A couple of weeks ago I was reading this article “What Does Ethical AI Look Like? Here’s What the New Global Consensus Says” by Shelly Fan.

First, I am super intrigued by AI because it is, whether we want it or not, one of the most revolutionary things happening these days! The problem is that we clearly have no idea what the possible consequences are – very much like we are still discovering the consequences of tools like Facebook.

The article analyzes a study published two weeks ago in Nature Machine Intelligence by a team from ETH Zurich, Switzerland, that takes an in-depth look at AI ethics guidelines around the globe. Even if the study is not based on a globally representative sample, because wealthy nations are represented far more than poorer regions, the findings are quite intriguing.

The most interesting finding, though, relates to the principles of AI and to its “ethical” definition. According to the authors, there is a strong global convergence towards five ethical principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. Up to this point I am not surprised, as they seem fairly reasonable. But the interesting part about these findings is that people can’t really agree on what any of those words mean when it comes to policy.

Sound familiar? This article intrigues me because we could replace “AI” with “humanitarian and development work” and it would still make complete sense.

As in the humanitarian context, the disagreement about what these words mean comes from the fact that they are abstract, imaginary myths we have created to put some sort of order to the society we live in. As Yuval Noah Harari explains very eloquently in his book Sapiens, these concepts are entirely imaginary. There is no such thing as responsibility in nature, nor is it a biological characteristic of human beings. So, if we agree that these are myths, imaginary concepts, we can also agree that the attempt here is to turn something completely invented into a reality.

Why is this relevant? Because what the search for an Ethical AI is leading us towards is the inherent contradiction within our system of values: our values are mutable, culturally defined, sensitive to time and place, and ever-evolving.

The definition of an Ethical AI is in fact the search for universal ethical principles, because AI will be universal. Privacy laws and regulations around the world, even between economically similar regions like the US and Europe, are a good example of how differently the same concept is understood and regulated.

Do I think this means that AI will fail no matter what? No. For me, AI becomes a force of destruction only if we fail to recognize this conundrum. We have to realize that if AI is a reflection of how well we are implementing our own ethical principles of transparency, justice, do no harm, responsibility, and privacy, we are in big trouble.

From the rise of Donald Trump in the USA and the emergence of populism in Europe to Brexit, we are not in a position to want any machine to learn from us or to replicate our democratic principles. The problem with Ethical AI is that we are witnessing a moment when our ethical principles are mutating: see the Greta Thunberg movement, the #MeToo movement, the #BlackLivesMatter movement. The clash between different ways of looking at equality, justice, do no harm and so on can only be reflected in AI, but the decision about which one is the right one, and which one we want to govern our lives, is far from being made.
