Is artificial intelligence dangerous to humanity?

Why artificial intelligence is not (yet) a threat to people

Does artificial intelligence pose a fundamental threat to humanity? Tesla boss Elon Musk is convinced that it does and has for years been warning, with characteristic vehemence, about the dangers of unchecked research in this field. He is by no means alone in this position: shortly before his death, the famous physicist Stephen Hawking emphasized that while AI research offers enormous opportunities for humanity, the risk of misuse should not be underestimated either, which is why strict oversight is required.

Unanimity

So should we fear that this will open up the next line of conflict between state authorities and the companies doing research in this area? Not necessarily, as some of the big players agree with the demand for clear rules of the game. Google, one of the most active companies in the field of artificial intelligence, even goes a step further: clear international regulations are needed that determine what is and is not allowed, emphasized Jen Gennai, head of Ethical Machine Learning at Google, recently at the "Making AI" event in Amsterdam.

A few months ago, Google committed itself to a set of "AI principles". These state that research must generally serve social progress; basic security and privacy rules are also part of the set. In addition, no AI research for weapon systems is to be carried out, a rule that specifically led to the end of an existing cooperation with the US Department of Defense. Google also promises to ensure that its AI acts as impartially as possible.

No replacement

And yet Google emphasizes that this cannot replace state regulation. It is up to governments to work out a set of rules at the international level, and as soon as possible. Of course, Google could simply sit down with other companies active in this area, such as Amazon, Microsoft, or Tencent, and agree on a minimum consensus. But that is precisely what the company wants to avoid: such rules should be determined by society, not by individual powerful corporations, says Gennai.

Basic rejection

Jeff Dean, head of Google's entire AI department, goes a step further: there are certain areas one should stay away from entirely, among them the development of autonomous weapons. Other areas need clear regulation; a current example is the question of what should be allowed with facial recognition.

Dean also emphasizes another point that he believes is being neglected in the current discussions: the question of effects on the labor market. Self-driving cars are a prime example. Thanks to artificial intelligence, they could save millions of lives every year, but at the same time they will cost numerous jobs; there is no getting around that. The truck driver's profession will largely become obsolete, and the outlook for taxi drivers is not much different. It may take a few years before all of this really materializes, which is exactly why now is the time for society to ask itself how to take care of the people affected.

AI is only just beginning

Amid all this, the company also emphasizes how far what we call artificial intelligence today still is from what the general public associates with the term. An AI takeover of the world is currently out of the question, if only because today's AI is still quite simple, as Olivier Bousquet, Google's head of AI for Europe, emphasizes.

What we currently call artificial intelligence is what researchers prefer to call machine learning: a so-called neural network is trained very specifically on a single task and can do nothing else. An AI that recognizes speech cannot identify objects in images, and vice versa. This has nothing to do with a general understanding of the world. Nevertheless, Bousquet also sees room for improvement in his own field of research: it is currently often impossible to understand in detail why a neural network arrives at a specific decision. Better analysis tools are needed here.
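To make this point concrete, here is a minimal sketch (not from the article; the framework, architecture, and dimensions are illustrative assumptions) of such a single-task network in PyTorch. Its output layer can express exactly ten image classes and nothing else:

```python
import torch
import torch.nn as nn

# A minimal image classifier: its whole "world" is 10 fixed classes.
# It cannot transcribe speech or answer questions; the output layer
# literally has no way to express anything but these 10 labels.
model = nn.Sequential(
    nn.Flatten(),                  # 28x28 grayscale image -> 784 values
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),            # exactly 10 scores, one per class
)

image = torch.rand(1, 1, 28, 28)   # a dummy input in the expected format
logits = model(image)
print(logits.argmax(dim=1))        # the single task it can perform
```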

The universal translator is close

A prime example of where current AI is already achieving great success is language processing, which has made rapid progress in recent years. Thanks to new models, neural networks can now "understand" and analyze whole sentences instead of just individual words, emphasizes Jakob Uszkoreit, head of Google's AI department in Berlin. Google itself recently released a new technique for pre-training neural networks as open source under the name BERT. On standardized benchmarks such as the Stanford Question Answering Dataset (SQuAD), it now performs better than humans. But even such comparisons are very relative, as Uszkoreit emphasizes: they have little to do with a real understanding of language. For that, the machine would also have to know how we otherwise perceive the world, and that is currently simply not possible.
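As a rough illustration of what a SQuAD-style benchmark measures (this is not something the article describes), here is a hedged sketch using the Hugging Face transformers library; the library call and the model name are assumptions, chosen as a publicly available BERT variant fine-tuned on SQuAD:

```python
from transformers import pipeline

# Extractive question answering in the SQuAD style: the model does not
# "know" anything on its own, it only locates an answer span inside
# the context text it is given.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT is a technique for pre-training neural networks on text. "
    "Google released it as open source in 2018."
)
result = qa(question="Who released BERT?", context=context)
print(result["answer"])  # expected: "Google"
```

That the answer must be a span of the given context is exactly why beating humans on such a benchmark says little about genuine language understanding.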

Google's AI chief Jeff Dean also sees speech recognition as one of the greatest success stories of machine learning: real-time translation of spoken language is now within reach, something that a few years ago was completely unthinkable, or at best the stuff of science fiction.

Training

Uszkoreit also gives an insight into how such good results are achieved. For BERT, texts from Wikipedia are used, among other sources, in which individual terms are repeatedly hidden from the neural network. By predicting the missing words, the network gradually develops a kind of understanding of how sentence structure works and how individual terms relate to one another.
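A toy sketch of this masking idea might look like the following; it assumes naive whitespace tokenization, whereas real BERT training uses subword tokens and more elaborate masking rules:

```python
import random

# Toy version of BERT-style masked language modeling data preparation:
# hide random words in a sentence so the network must predict them
# from the surrounding context.
def mask_sentence(sentence, mask_prob=0.15, mask_token="[MASK]"):
    tokens = sentence.split()
    masked, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)
            labels.append(token)   # the hidden word becomes the target
        else:
            masked.append(token)
            labels.append(None)    # nothing to predict here
    return " ".join(masked), labels

random.seed(0)
print(mask_sentence("The capital of France is Paris"))
```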

This is exactly what shows how important the choice of training data is: the resulting AI's understanding is based entirely on what it has been shown. If the chosen texts are peppered with outdated social ideas, this can become quite problematic and offend the users of a service built on them. Jen Gennai, head of Google's Ethical Machine Learning department, cites an example: shortly before the launch of the Smart Reply feature in Gmail, which suggests automatic replies to users, it was noticed that the feature had a strong tendency toward role clichés. Fixed male or female forms kept being used for certain professions, regardless of whom the original mail was actually about. Manual corrections had to be made here.
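One way such role clichés can be made visible (a hedged sketch, not Google's actual test procedure; the library and model name are assumptions) is to ask a masked language model to fill in the pronoun for different professions:

```python
from transformers import pipeline

# Probe a masked language model for gendered associations with
# professions. If one pronoun consistently dominates for a profession,
# the model has learned a role cliché from its training texts.
fill = pipeline("fill-mask", model="bert-base-uncased")

for profession in ["nurse", "engineer"]:
    predictions = fill(f"The {profession} said that [MASK] would be late.")
    top = [p["token_str"] for p in predictions[:3]]
    print(profession, "->", top)
```

A skewed result here would come purely from the training corpus, exactly the effect Gennai describes.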

This is exactly why every new Google project that uses AI must first be reviewed. The team deliberately tries to provoke mistakes, Gennai emphasizes, and if this reveals major deficits, it can even force the launch of a new product to be postponed.

Diversity

To develop AI that benefits everyone as much as possible, it is also important to incorporate many different perspectives, Google emphasizes, and that applies not only to the data but also to the research itself. Not least for this reason, the company recently founded its own AI research center in Ghana. Its head, Moustapha Cisse, points out that the challenges are not the same everywhere in the world. Language AI, for example, has so far relied on huge text corpora for training, which poses a problem for regions of the world where such materials are not freely available to a similar extent. Fundamentally different approaches are needed here.

At the same time, Cisse is convinced that AI has far greater potential in Africa to help improve social conditions. Even within individual countries there are often many different languages, so automatic translation would be an important step forward in everyday life. AI can also play an important role in flood warning or in optimizing agriculture, two areas Google is currently working on. The greatest potential may lie in health care, since AI could relieve doctors of a great deal of work, from handling daily paperwork to analyzing X-ray images.

Health

So will we have to trust machines instead of humans when it comes to medical diagnoses in the future? Not necessarily, says Katherine Chou, product manager in Google AI's healthcare division: the company's own studies show that the best combination is usually a machine working together with a specialist. At the same time, the data also show how much room there currently is for better diagnoses, not just faster ones. Google's studies on diabetes-related blindness found that doctors agree with their colleagues' findings only 60 percent of the time on average. Even more telling, doctors agree only marginally more often with their own earlier assessments of the same cases: here the value is 65 percent. So there is plenty of room for intelligent systems to achieve better results. (Andreas Proschofsky, November 15, 2018)