
When Stephen Hawking warns of the end of the world because mankind cannot keep up with the development of artificial intelligence, people listen carefully. When Tesla chief and energy entrepreneur Elon Musk says the same thing, there is good reason for concern.

For more than a year now, the two men, who are credited with visionary talent for good reason, have tirelessly warned of the dangers of machine learning. Both have read the same book on the subject: "Superintelligence: Paths, Dangers, Strategies" by the Swedish philosopher Nick Bostrom. So what are these apocalyptic scenarios about?

The Skype co-founder is worried

On the sidelines of the TED Conference in Vancouver, there was an opportunity to talk about this with Jaan Tallinn, the Estonian physicist and programmer who co-founded Skype.

And yes, what he told the SZ in the interview is quite scary. "As long as artificial intelligence is dumber than humans, we can treat it like any other technology. But as soon as we are dealing with an artificial intelligence that is potentially smarter than humans, the situation changes fundamentally," he says. He does concede that other technological developments may be more frightening right now, synthetic biology, for example.

But what happens when machines can draw ever more complex conclusions? That is what worries Tallinn. It should not be confused with the scenarios from science-fiction films: "Films are a double-edged thing because they have to entertain, and they do that with drama. But the most likely scenarios for the extinction of humanity are not particularly dramatic. There is no heroic fight. It can happen very quickly."

The fate of the earth always depends on the smartest actor

The greatest danger is that an artificial intelligence superior to humans simply would not care about humans at all. "However you look at it, the fate of our planet always depends on the smartest actor on the planet. It doesn't matter whether that is a person or something else."

But Jaan Tallinn is not a pessimist. He currently has ten projects under way that aim to design artificial intelligences so that they do not become a danger to humanity. One of them is the Future of Humanity Institute at Oxford University, which is headed by the aforementioned Nick Bostrom. In the interview, you can find out exactly what he is working on and how he hopes to prevent the machines from killing us.