It’s been several months since Microsoft ventured an experiment in artificial intelligence on Twitter. They programmed a chatbot called Tay and linked it to a Twitter account. The chatbot couldn’t do much except learn: with each interaction, it learned more from the human users it communicated with. Shortly before Microsoft switched Tay off again, the AI had turned into a racist and Hitler apologist.

The neutral program that became a Nazi within a few hours of interacting with people on Twitter can serve as a prime example of two fundamental problems of AI. The first is neutrality, or rather the lack of the moral compass that most people carry within themselves. The algorithm doesn’t distinguish between right and wrong; it can’t, because it doesn’t understand the concept behind the distinction. If an algorithm regards something as “immoral,” it is because one of its programmers thought to tag it as “immoral.” The second problem is even bigger, because artificial intelligence learns from human intelligence. The demand that we should first develop human intelligence before trying to teach it to computers has a point.

Amazon, too, had to learn just a few weeks ago that you can’t rely on AI alone. The company used an algorithm to pre-screen applicants and decide who would ultimately be invited for an interview. The AI did this until a human employee noticed that, for some reason, only men were suddenly showing up for interviews. The reason was quickly found: at some point in its learning process, the algorithm had simply decided that women were inherently unsuitable for the advertised positions.
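How something like this can happen is easy to reproduce. Below is a minimal sketch in Python, with invented numbers and a toy model rather than Amazon’s actual system: a screening model is trained on historical hiring decisions in which equally qualified women were hired less often, and it dutifully learns to hold the gender signal against applicants.

```python
# Minimal sketch of learned hiring bias -- hypothetical data, not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
experience = rng.normal(5, 2, n)       # years of experience
is_woman = rng.integers(0, 2, n)       # gender-correlated signal in the resume

# Historical labels: equally qualified women were hired less often.
p_hire = 1 / (1 + np.exp(-(experience - 5))) * np.where(is_woman == 1, 0.4, 1.0)
hired = rng.random(n) < p_hire

model = LogisticRegression().fit(np.column_stack([experience, is_woman]), hired)
print(model.coef_)  # the weight on is_woman comes out clearly negative:
                    # the model has "decided" that women are less suitable
```

The point is not the toy numbers but the mechanism: the model never “chose” to discriminate, it simply reproduced the pattern in its training data.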

With other algorithms, the racism is obvious at first glance, for example in predictive policing. Here the AI is fed, among other things, information about crimes, criminals, and crime scenes. From this it generates predictions about where crimes are most likely to occur. At first glance this seems sensible, since the best crime is a prevented one. In some software that was actually deployed, however, the predicted criminals were about 99% black. The USA saw a similar problem with early facial recognition programs, which regularly failed on black people. The reason? They had been fed almost exclusively with example photos of white people. Can one then accuse the algorithm of racism? And what about the programmers, who didn’t even make their mistake intentionally?
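The facial recognition failure follows the same logic. Another hedged sketch, with made-up features rather than real photos: when one group supplies 95% of the training examples, a model can score well on that group and still perform at chance level on the group it rarely saw.

```python
# Sketch of the data-skew problem -- toy features, not a real face-recognition model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    signal = rng.normal(shift, 1.0, (n, 4))   # group-shifted "face features"
    noise = rng.normal(0.0, 1.0, (n, 4))      # features shared by everyone
    labels = (signal.sum(axis=1) > 4 * shift).astype(int)  # match / no match
    return np.hstack([signal, noise]), labels

X_a, y_a = make_group(4750, shift=1.0)    # 95% of training photos: group A
X_b, y_b = make_group(250, shift=-1.0)    # 5%: group B
clf = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

Xt_a, yt_a = make_group(1000, shift=1.0)
Xt_b, yt_b = make_group(1000, shift=-1.0)
print("accuracy on group A:", clf.score(Xt_a, yt_a))  # high
print("accuracy on group B:", clf.score(Xt_b, yt_b))  # far lower, near chance
```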

The problem is not made any easier by the fact that the algorithms’ learning processes take place inside a black box. In short: the programmer feeds the AI with content, but can no longer trace how and why it draws its conclusions; he can only judge the result. In the cases described, the results were frightening but reparable. It is only a matter of time before AI decides about much more than an invitation to an interview.
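The black-box problem itself can be made tangible with a generic toy network (again an illustration, not any particular company’s system): every parameter it has learned can be printed out, yet none of those numbers says why a particular input was accepted or rejected. The only handle left is to feed in inputs and judge the outputs.

```python
# Sketch of the black-box problem: all parameters are visible, none are explanations.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X, y)

total = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(f"{total} learned parameters, all inspectable")  # ~1,400 raw numbers
print(net.predict(X[:1]))  # ...but the only usable handle is the final verdict
```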

As early as 1942, the Russian-American scientist and author Isaac Asimov had the foresight to see that rules would be needed here. With his Three Laws of Robotics, he created them:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov created these three laws for a short story, and they have shaped the world of science fiction ever since. Sometimes they are adopted more or less wholesale; other authors refer directly to Asimov in their works. And like many things in science fiction, Asimov’s three robot laws have also found their way into actual science: thankfully, they enjoy continued popularity among researchers in artificial intelligence and robotics.

The question now, however, is whether it takes a whole code of law before an algorithm can be let loose on humanity: a collection of rules that prevents discrimination without hindering the algorithm in its work. But is that even possible? In our efforts to fight prejudice, we sometimes forget that prejudices can also be pre-judgments. Each of us makes these pre-judgments; this fast, unconscious pigeonholing is what makes our daily survival possible. Without it, not even the human brain could cope with the flood of information that would otherwise overwhelm it.

But how can we succeed in creating an algorithm without prejudices if we cannot even eliminate our own?

