Data science dies in 2018

Open Password - Tuesday March 26th, 2019

# 535

Information Science - Future of Information Science - Open Password - Data Science - Willi Bredemeier - Simon Verlag für Bibliothekswissen - Bernd Jörs - Information Science and Society - Winfried Gödert - Digitization - Transhumanists - Julian Nida-Rümelin - Nathalie Weidenfeld - Humanoid Robots - Humanistic Digitalism - Human Dignity - Combat Robots - Totalitarian Danger - Hannah Monyer - Martin Gessmann - Wolfram Eilenberger - John Searle - Douglas Hofstadter - Immortality - LexisNexis Risk Solutions - Claims Automation - Insurance - BIIA


An Open Password project - In the pipeline:

Future of information science
Does information science have a future?

"Information science is dead,
long live data science"

Edited by Willi Bredemeier - Simon Verlag für Bibliothekswissen, Berlin 2019 - Further publications in Open Password as well as Open Access publication on the website of an institution relevant to the industry

See also Open Password, February 25, # 518: Brief description of the book - The editor's curriculum vitae - Open Password, March 5, # 523: The contents: Fundamentals and perspectives - Offerings in teaching - On the research fronts of information science - Open Password, March 14, # 528: The editor's foreword: Of adventurers of the mind, adding new insights to our knowledge - Science as a self-image, as a social system, and as the development of sustainable solutions - Open Password, March 20, # 532: A new information science from the ruins! - What are the core areas of information science, what are its limits? - The first part of the book

The second part of the book:
Basic criticisms of information science


Information science reflections - Fouling one's own nest, with suggestions for realignment - By Winfried Gödert, April 27, 2016 (contributions were not yet numbered at that time)

Digital euthanasia? - By Winfried Gödert - # 372, May 29, 2018

Digital humanism or humanistic digitalism? - By Winfried Gödert - in this issue

Information science is dead, long live data science - By Bernd Jörs - # 422, August 20, 2018 - # 429, September 30, 2018 - # 442, September 24, 2018 - # 455, October 19, 2018 - # 458, October 25, 2018

Information Science and Society (1)

Digital humanism
or humanistic digitalism?

By Winfried Gödert

Digitalization, as a process pervading both private life and the working world, is on everyone's lips. But what significance does it have for our concepts of society's future? Has that train already left the station, or do we still have options to shape it?

One faction - still small in the public eye - the transhumanists, sees digitization as a key currency to which people's ideas of their individual and social future must be subordinated [1]. Another, established view does not want to give up the unique status of the human being contained in humanism and advocates a digital humanism as a way of reconciling the consequences of digitization with humanistic principles. This raises the question of whether there is an open and fair competition to shape our future, or whether the position argued by the supporters of digital humanism has already been corrupted by the paradigm of the position it rejects.

The authors of a current book [2], which was awarded the Handelsblatt's German Business Book Prize 2018, were asked in an accompanying interview entitled "Digitization is trifling compared with industrialization": "With AI, we are on the threshold of becoming godlike creators of new beings ourselves. Can we, and may we?" They answer: "We don't, we can't, and if we could, we shouldn't. The software developers themselves are usually convinced that artificial intelligences, including humanoid robots, have no mental properties, pursue no intentions, have no desires, feel no pain, and do not even recognize or decide anything." [3]

The authors will have had well-considered reasons for justifying their position with such an answer. Nevertheless, we want to ask: can such a humanism be preserved and developed further for the future, or is it rather the fiction of a humanistic digitalism, a social order based on the wishes and possibilities of digitization, merely garnished with humanistic décor?

In the absence of suitable benchmarks for comparing beliefs and attitudes, we cannot give a definitive answer to this question. We can only note that with this question we are deep inside an ethical debate that demands clear answers and is not settled by "we don't", "we can't", "we mustn't". Things have been done for a long time; this column, too, carries benevolent reports on the potentials and opportunities of autonomous artificial intelligence, reports that shape the image of the future human being and assign him a role subordinate to those possibilities. What may currently be done is often geared more to what is economically desirable than to humanistic standards handed down through the history of civilization, and no corrections are in sight. It is hardly a misjudgment to consider the euphoria about these potentials dominant at the moment. Collateral damage and side effects are unsightly, but they are generally considered manageable; they lead neither to a fundamental change in behavior nor to a paradigm shift concerning the further penetration of all areas of life by digitalization.

An important humanistic standard finds its expression in our Basic Law, which begins: "Human dignity is inviolable." How far our society actually lives up to this can be followed, sadly enough, in the daily news. How digitization - let alone transhumanism - can be reconciled with it would be the subject of a necessary broad-based discourse. What useful contribution can digitization make to human dignity if, for example, in highly acclaimed books, the discussion about the use of autonomous combat robots is dominated by self-confessed AI advocates arguing how human weaknesses in the conduct of war might be avoided, and noting that robots have no physiological weaknesses because they cannot be tortured [4]?

The concept of human dignity as a basic endowment beyond question is a grand gesture that has to be converted into small change in private, social and professional life. This is no easy task, yet it cannot be avoided without damage. We can only be safe from weakenings of human dignity - even merely mental ones - if we defend ourselves against them with suitable protective measures. In practice, this already meets its limits in a political environment that the individual can often not influence. The basis for protective measures, however, develops in everyone's mind. Mere observance of prohibitions or regulations is not sufficient for this. What is required is the practiced interpretation and shaping of normative principles to secure the desired image of the human being in the society to be developed. There are by now many accounts, filled with robust arguments, showing the potential for the formation of totalitarian structures in ideas that treat digitization as the key currency for future developments and regard humanistic principles, as it were, as fixed interest income.

So let us return to our play on words: digital humanism or humanistic digitalism? If an idea of digital humanism is not to fall prey to the error of simply assuming that digitization will serve humanistic principles, one will have to fight for it. Digital humanism can only exist if humanistic ideals serve as the model for the human being in the future social order, and if the ideas and procedures associated with digitization remain confined to the level of tools in the service of human beings. Accepting and exhausting the possibilities of digitization as the central design instrument for society's future development inevitably leads to a subordination of humanistic values, even if extreme positions such as transhumanism are left aside. Clear evidence of this is the repeated use of the computer metaphor as an explanatory model for cognitive processes.

So let us not fool ourselves by regarding the creation of a digital humanism grounded in our own humanistic tradition as a goal that will be achieved naturally. It is, rather, a hard piece of work that demands confrontation with currently prevailing economic paradigms and calls anew on each individual's ability to keep rationality and human dignity compatible with one another. Otherwise we could end up, faster than we would like, in a digitalism committed to rational principles, but without a human face.

It is, of course, the weakness of contributions like this one that they trigger no immediate action. Writing them anyway can only be justified by the fact that, in phases of upheaval, it may be necessary to comment from different perspectives on the standards that are developing into the guiding model, in order to preserve the options for correction.

[1] Cf. on the concept of transhumanism and its relationship to AI: Harari, Y.N.: Homo Deus: A Brief History of Tomorrow. Munich: C.H. Beck 2017. Tegmark, M.: Life 3.0: Being Human in the Age of Artificial Intelligence. Berlin: Ullstein 2017.

[2] Nida-Rümelin, J., N. Weidenfeld: Digital Humanism: An Ethics for the Age of Artificial Intelligence. Munich: Piper 2018.

[3] At: -ap3.

[4] Schmidt, E., J. Cohen: The Networking of the World: A Look into Our Future. Reinbek near Hamburg: Rowohlt 2013, especially pp. 292-312.

Information Science and Society (2)

Do we gain immortality
by downloading our own brains
onto a computer?

Did Winfried Gödert's essay above give you food for thought? If you want to learn more about this topic, we recommend:

Monyer, H., M. Gessmann: The ingenious memory: how the brain makes our future out of the past. Munich: Penguin Verlag 2017.

As well as a video that can be accessed via two addresses:

In this video, an interview by Wolfram Eilenberger with John Searle, you can also find the following passage:
Eilenberger (40 min. 25 sec.): So there is this idea; maybe it is just a phantasm. I have spoken with colleagues of yours, Douglas Hofstadter, for example. He says: well, let's see. The idea is that you download the content of consciousness, all of the information your brain now contains - nobody really knows how - and then basically ensure that what makes you you, namely your mind, does not have to die at all, while your body will eventually come to an end.

Searle (40 min. 25 sec.): That idea is touchingly insane. Take all the information in the university library in Zurich, feed it to a computer, and you have a computer that knows more than the smartest person. A computer never dies; when its hardware gets old, you replace it. You make updates all the time, and backups too, in case you get run over by a truck. All of this is weird and ridiculous, for reasons you know: the computer does not have a brain, it has a simulation of a brain. It has no consciousness, and we have no idea how to create consciousness.

Eilenberger: Well, I would say these promises are too nonsensical even to be wrong. Nobody knows what he is talking about.

Searle: The English word for it is "bullshit". And we know why: because they have no idea how to build a conscious computer. There is one deeper argument that I have not yet addressed. What we call information is within us. Information is relative to the observer; the machine has no information. It only has complex electronic circuits with which we can simulate information.


Balancing Automation and Empathy

LexisNexis Risk Solutions has released its 2019 "Future of Claims Report", revealing strong alignment between insurance carrier practices and consumer desires, and with it opportunities to expand automation for greater mutual benefit. As automation becomes more pervasive, insurance carriers are creating efficiencies and reducing costs; but while consumers expect their insurers to offer easy digital access to products and services, they also want a personal touch.

LexisNexis surveyed 24 senior-level auto insurance executives from the top 50 automotive insurance carriers, as well as 1,755 auto insurance purchasers between the ages of 25 and 65, and the results all point to a careful balance between claims automation and empathy. The report reveals key insights demonstrating that automotive insurance carriers continue to embrace virtual claims processes, with 95 percent using or considering virtual handling.

• Touchless claims are also growing in popularity, with 79 percent of carriers surveyed open to or considering using them, up from 42 percent 18 months ago.

• Consumers with prior claims experience show a sharp drop in claims satisfaction when they have to talk with more than one person.

• One in five consumers currently prefers claims self-service options, but complains that the self-service first notice of loss (FNOL) process asks too many questions.

• Carriers already using claims automation report a reduction in touches (removing 1-4 manual touches), faster cycle times (a 1-15-day reduction per claim), increased employee productivity (a 50 percent reduction in processing cost) and lower loss adjustment expense (3-10x more cases processed per adjuster).

A Path Forward for Claims Automation: When evaluating the factors that most influence customer satisfaction with the claims process, empathy emerged as the most impactful, illustrating that carriers must look for ways to integrate the human touch into automated processing. Furthermore, the study revealed that consumers are letting their fears hold them back from fully embracing self-service claims automation, especially those customers who have not dealt with a recent claim.

The Future of Automation: As technology advances, carriers believe their businesses will implement enhanced automation in the next three to five years, with forward-leaning carriers expecting the share of their touchless claims to range from 15 to 95 percent as they continue to work out how to operationalize touchless claims. The LexisNexis report also found that as automation for non-complex automotive claims continues to improve and expand, it will bring the most significant improvements to FNOL and repair estimates for customers. Estimates and investigations will also be altered significantly by artificial intelligence (AI) and advanced analytics, with forward-leaning respondents planning or considering increased use of advanced analytics and AI in the next few years.

Source: BIIA

Open Password

Forum and news
for the information industry
in German-speaking countries

New issues of Open Password appear four times a week.

If you want to subscribe to the e-mail service free of charge, please register at

The current edition of Open Password is available on the web as soon as it is published. This also applies to all previously published editions.

International Co-operation Partner:
Outsell (London)
Business Industry Information Association / BIIA (Hong Kong)