Ethical AI

Ethics & artificial intelligence: What is AI allowed to do? And what is it not allowed to do?

As an AI company, Retresco works at the exciting intersection of technology, business and society – an interesting and sensitive field. As artificial intelligence penetrates ever more of our everyday lives, the question of ethical and social responsibility has never been more important. Even today, algorithms decide which air fares we are offered when booking online or which news appears in our Facebook feed.

The technological mechanisms behind these algorithms are largely opaque and incomprehensible to the average user – a fact that does little to foster social acceptance of artificial intelligence. Further problem areas arise, for example, when AI systems make erroneous decisions or systematically discriminate against certain social groups, or when governments use the technology to place citizens under blanket surveillance and massively invade their privacy. It is becoming increasingly clear that the controversial aspects of artificial intelligence often concern not only the question of what AI can do, but also what it is allowed to do.

Artificial intelligence and problems with acceptance

Studies regularly show that public trust in artificial intelligence is not particularly well developed. A recent survey commissioned by the World Economic Forum attracted media attention: according to a dpa report on Zeit Online, 41 percent of respondents answered ‘Yes’ to the question ‘Are you concerned about the use of artificial intelligence?’, and 19 percent even said the use of artificial intelligence should be prohibited. More than 20,000 people in Germany, China, the USA, Saudi Arabia and 23 other countries took part in the survey. The study illustrates the sometimes considerable reservations people have about AI systems. Increasing acceptance of and trust in artificial intelligence is indispensable, but requires an enlightened understanding of what AI can and cannot do – and above all, of what it is and is not allowed to do.

Ethical challenges of natural language processing

AI-based applications in natural language processing – the field in which Retresco is also active – are not immune to such risks either. In February 2019, the US non-profit research organisation OpenAI attracted media attention with GPT-2, a language model trained on text from around eight million websites – a total of 40 gigabytes of data – in order to generate texts automatically. The researchers themselves were surprised by the results: the text generator produced texts of such high quality that, out of concern about potential misuse, they decided to release only a slimmed-down version of the model. ‘We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,’ Jack Clark, Policy Director of OpenAI, told the MIT Technology Review. The researchers’ main concern was that the system could be misused to produce and distribute fake news on social networks cheaply and at scale.
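To give a sense of what such a system does in practice, here is a minimal sketch of prompting the publicly released small GPT-2 checkpoint via the Hugging Face transformers library – an assumption on our part, since the article names no tooling; the withheld full model is not needed for this illustration:

```python
# Minimal sketch: text generation with the released small GPT-2 checkpoint.
# Assumes the Hugging Face `transformers` package is installed; this is an
# illustration, not the original OpenAI research setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Even this small public checkpoint continues a prompt with fluent-sounding prose, which is precisely why the full model's potential for mass-producing plausible fake text worried the researchers.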

Scenarios like these make the question of how to deal with artificial intelligence ethically unavoidable. What is artificial intelligence allowed to do? Who is responsible for decisions made by artificial intelligence? And last but not least: what influence does artificial intelligence have on our self-image as human beings? These are just some of the questions around which the debate on ethics and artificial intelligence revolves.

Ethical AI: Fair, transparent, value-oriented & secure

The European Commission was the first international institution to address these issues and attempt to establish criteria for ‘Trustworthy AI’ based on the EU Charter of Fundamental Rights. Under the title ‘Ethics Guidelines for Trustworthy AI’, the High-Level Expert Group on AI – 52 AI experts from science, politics and business – published its 40-page guidelines on 8 April 2019. The guidelines regard the following four topics as central to the development and application of trustworthy AI:

Fairness: The media have repeatedly reported incidents in which an AI discriminated against certain groups of people – for example, Google’s photo service, which erroneously labelled people of colour as gorillas, or an HR algorithm at Amazon that systematically disadvantaged female applicants. Such problems often stem from the quality of the data set with which the AI was trained: training data can reflect the discrimination that already exists to varying degrees in society. If facial recognition software is trained only on photos of people with light skin, for example, it will have problems recognising people of colour. Unfair decisions can therefore be avoided, among other things, through balanced data sets that take social diversity into account.
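What such a check might look like in practice is sketched below. The guidelines prescribe no specific metric, and all column names here are hypothetical; one simple diagnostic is to compare a model’s positive-decision rate across groups:

```python
# Minimal sketch (not Retresco's method): checking a classifier's decisions
# for demographic parity, i.e. whether the rate of positive outcomes
# differs between groups. Data and column names are hypothetical.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive predictions per group."""
    return df.groupby(group_col)[pred_col].mean()

# Hypothetical binary hiring decisions for applicants from two groups.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],  # the model's decision
})

rates = positive_rate_by_group(df, "group", "hired")
print(rates)                                      # per-group selection rates
print("parity gap:", rates.max() - rates.min())   # 0 would mean perfect parity
```

A large parity gap does not prove discrimination on its own, but it flags exactly the kind of imbalance that skewed training data tends to produce.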

Transparency & Traceability: The development of transparent and comprehensible AI procedures is a prerequisite for the safe use of AI in many areas. In many cases, however, it is no longer possible to trace how an AI system – particularly one based on neural networks – arrived at its conclusion. To counter this black-box phenomenon, approaches grouped under the heading of Explainable AI aim to prepare results in such a way that people can understand and verify them.
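One widely used model-agnostic technique in this vein is permutation importance – a hedged sketch chosen by us for illustration; the guidelines themselves mandate no particular method:

```python
# Minimal sketch: permutation importance as one Explainable AI technique.
# Shuffle each feature and measure how much accuracy drops; large drops
# mark features the otherwise opaque model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")  # the five features the model leans on most
```

The point is not the specific numbers but the principle: even a black-box model can be interrogated after the fact so that its behaviour becomes explainable to people.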

Responsibility: To a certain extent, the use of AI systems calls our conventional value system and the principles of responsibility and liability into question. Until now, decisions – and, consequently, responsibility for the resulting actions – could always be attributed to specific persons. If, however, the actions of AI systems can no longer (or simply no longer have to) be controlled by humans, the question of responsibility must be renegotiated. Experts therefore favour tightening accountability for AI systems: for example, it should be determined a priori who is responsible for an AI system in order to avoid a diffusion of responsibility. In addition, human-machine interaction should enable humans to intervene in, stop or interrupt a system. In a nutshell: many things can be automated, but responsibility is not one of them. Even if more and more decisions are delegated to algorithms or AI systems in the future, this does not automatically mean that responsibility for those decisions – and thus also for any wrong decisions – lies with the machines.
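What such an intervention point can look like in software is sketched below – a minimal, hypothetical example (the threshold, names and escalation path are our assumptions, not part of the guidelines) in which the system acts autonomously only above a confidence threshold and otherwise defers to a named human reviewer:

```python
# Minimal sketch: a human-in-the-loop gate that keeps responsibility
# attributable. Below a confidence threshold, the model only proposes
# and a named human decides. Threshold and names are hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # fixes accountability a priori

def decide(label: str, confidence: float, reviewer: str = "on-call reviewer") -> Decision:
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Escalation path: the model proposes, the human disposes.
    return Decision(label, confidence, decided_by=reviewer)

print(decide("approve", 0.97))  # automated decision
print(decide("approve", 0.62))  # deferred to the human reviewer
```

The design choice matters more than the code: every decision record names who made it, so responsibility cannot diffuse into the machine.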

Value orientation: AI systems should be human-centred and value-oriented so that they benefit society, the environment and future generations. An orientation towards fundamental values is indispensable: AI systems should promote freedom, equality, justice, solidarity, tolerance and pluralism, and must not be used for discriminatory purposes or against democratic structures.

Artificial intelligence is one of the greatest opportunities of our time: it can contribute to economic growth, reduce health risks, make our daily and working lives easier and improve our environment. And precisely because AI systems will significantly shape our present and future lives, we need to define sound ethical values that guide the development and use of artificial intelligence.