Why can computers never think like humans?

Can machines think?

The Turing test from a systems theory perspective

Alan Turing did not answer this question directly, but indirectly, via the now practically feasible thought experiment of an "imitation game". His answer was that it could at least not be ruled out that machines (computers) can think if they imitate communication, for example as an interplay of questions and answers, so well that human judges are very likely unable to decide whether the communication in question is produced by a machine or by a human being.1 This holds particularly in view of (then still future) self-learning machines or algorithms, which in their application are able to change their own structures and thus potentially to improve, continuously, the simulation (?) of communication in the "imitation game", or the thinking (?) supposedly coherent with it.
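To make the setup of the "imitation game" concrete, here is a minimal sketch of the test as a pure exchange of questions and answers. The interface (ask, guess_machine, the respond callables) and the number of rounds are hypothetical choices for illustration only, not part of Turing's paper or of this article.

```python
import random

def imitation_game(interrogator, human_respond, machine_respond, rounds=5):
    """One test session: the interrogator converses with two hidden parties,
    labelled only "A" and "B", and must decide which one is the machine.
    Returns True if the machine is identified correctly."""
    # Randomly hide which label stands for the human and which for the machine.
    responders = [human_respond, machine_respond]
    random.shuffle(responders)
    parties = {"A": responders[0], "B": responders[1]}

    transcript = []  # the only thing the interrogator ever gets to see
    for _ in range(rounds):
        for label in parties:
            question = interrogator.ask(label, transcript)
            answer = parties[label](question)
            transcript.append((label, question, answer))

    guess = interrogator.guess_machine(transcript)  # "A" or "B"
    return parties[guess] is machine_respond
```

Decisive for the argument that follows is only that everything passing between the parties is text: questions connect to answers, and nothing behind the answers is accessible.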

We will be able to answer the question asked at the beginning relatively easily and directly, with reference to Luhmann's systems theory2 and thus from a radically different point of view than usual. However, we will have to concede that this simple, straightforward answer comes at the cost of the complex theoretical requirements of systems theory. From a systems-theoretical perspective, for example, it is by no means self-evident, although the opening question suggests as much, to which instance thinking can unquestionably be attributed, in contrast to the questionable thinking of machines.

The seemingly obvious answer, namely that it is certainly humans in contrast to machines, is by no means evident on closer inspection. Arms and legs, for example, undoubtedly belong to humans, but these amputable parts of the "human" hardly have an important function in thinking. Further consideration usually leads to locating the thinking ability of "humans" in the brain. But is it really, beyond question, the brain, perhaps in connection with other parts of the body such as the gut, that can think? And does this stand in contrast to the questionable ability of a "computer brain" to think?

Even these simple considerations show that answering the question of whether machines can think requires a closer examination of the premise this question takes for granted: namely, that the ability to think, if perhaps not in "the human being" in general, can at least be specifically located in the brain. We will see in the following that systems theory allows "the human being" to be considered in a more differentiated way than usual.

Following these considerations, it seems reasonable to use systems theory to distinguish at least three system levels: besides a mental level, related to operations of the mind, a physical level, encompassing the brain or the neural network; and furthermore, with regard to "the human being" as a social creature, a system level of communicative operations.

The presumed answer of Niklas Luhmann

It may come as no surprise that the sociologist, who died in 1998, would have answered "no" when asked whether machines can think. What may be surprising, however, is that Luhmann would also have answered the question of whether the ability to think can be located "in humans", and specifically in the brain, with a clear "no".

This raises the question of which instance systems theory does ascribe the ability to think to.

The answer is: the place of thinking is thinking. As everyone can determine introspectively, it is thoughts themselves that make further thoughts, a "flow" of thinking, possible. Thoughts connect operationally to thoughts. The systems-theoretical dictum that it is neither brains nor machines but thoughts that think is therefore not aimed at a tautology, a definitional truism ("thoughts are thoughts"). Rather, systems theory assumes that there are "autopoietic" systems3 which, with the help of their operations, in the present case thoughts, make it possible that precisely these operations continue to be maintained within their systemic boundaries. The place of thinking is therefore to be assigned to a system which has no system-internal possibility of thinking a final thought, a thought that would bring the system (one may call it consciousness) to collapse.

However, a "de-tautologization" of the self-referential operations of the consciousness ("thinking thoughts") is indispensable. Autopoietic systems cannot exist "for themselves" in a solipsistic manner, but only in that they are continuously - precisely on the basis of their specific operations - from an environment distinguish. The relevant environment of consciousness in maintaining its mental operations is obviously the brain on the one hand, and its communicative environment on the other. Obviously relevant because it can be shown with ease or can be experienced introspectively that by influencing the physical environment (e.g. with the help of pain, alcohol, psychotropic drugs), or by means of the communicative environment, i.e. through language, consciousness in one's own mental operations irritate leaves.

It is crucial to establish (or to verify introspectively) that neither physically (neuronally) conditioned environmental influences, for example by means of psychotropic drugs or alcohol, nor communicative environmental influences intended through language are able to determine specific thought operations. That is exactly why, in this context, we speak of irritations. Both the social and the neural environment of consciousness form their own operative domains or systems which, owing to their operational autonomy, do not permit any informative coupling. Within the neural system, neural "firing" leads to neural "firing". Within consciousness, it is thoughts that think. Communication communicates and constitutes social systems such as interaction systems ("conversations"), the mass media or the scientific system.

Precisely because thoughts cannot "diffuse" directly into communications, and communications cannot "diffuse" directly into thoughts, but rather form independent spheres both operationally and informatively, consciousness and communication are able to surprise, inspire and confound each other, by means of language. It is only the operational and informative decoupling of social systems operating on the basis of communication, and of consciousness, that is, systems operating on the basis of thoughts, that enables mutual non-transparency: a reciprocal, in principle impenetrable environmental complexity which, in coping with this complexity, makes possible the mutual constitution and maintenance of consciousness and social systems.

Information, differences that make a difference,5 can therefore only be constructed system-internally, in reducing complexity: within consciousness on the basis of thoughts, within social systems on the basis of communicative operations.

An operative or informative short-circuiting of the two spheres, communication would then be understood as a reciprocal transfer of externalized information between conscious systems, would cause both consciousness and social systems to collapse in a kind of solipsistic catastrophe. Autopoietic systems, whose existence is made possible precisely by the continuous maintenance of their own operations, through their own operations, in contrast to an environment, would lose their environment and thus themselves. Hence neither communicative nor neural operations can be identified with processes of thinking, nor could they represent thinking. Rather, they are operations that enable one another reciprocally through, and on the basis of, their respective difference.

The place of thinking is thinking. Neither a machine nor "the human being", not even specifically the brain, can think. From this perspective, Alan Turing's question of whether machines can think becomes meaningless, because both the question and the premise it takes for granted ("humans, as opposed to machines, can think") must be answered with "no". The disposition of the Turing test therefore needs to be reinterpreted.

New interpretation of the Turing test from a systems theory perspective

The Turing test is commonly understood as a method by which thinking, which is accessible only to the person concerned, can, if perhaps not be identified, then at least with high probability be indicated or inferred, and this regardless of any questionable machine thinking. In this respect the Turing test connects pragmatically to everyday experience, just as people (communicatively and mentally) attribute the ability to think to themselves in everyday social, that is, communicatively arranged encounters.

From a systems-theoretical perspective, the Turing test cannot be regarded as a method with which (machine) thinking can be indicated, disclosed or even identified. Rather, the disposition of the Turing test empirically illustrates the systems-theoretical premise that the systematic place of thinking is thinking, and the systematic place of communication is communication. The "imitation game" aims to let communication communicate. In operational terms, communications ("questions") connect strictly to communications ("answers"), not to thoughts. In this respect, thoughts can at best be thought by thoughts, and, as in the "imitation game", communication can at best identify or indicate communication.

The identification of thoughts through thoughts, or of communication through communication, presupposes a sufficiently high degree of operational complexity, whereby the distinction between self-reference and external reference is assumed here. This complexity, once again, is achieved precisely because mental and communicative operations differ systematically. Only in this way is a mutual, co-evolutionary and self-reinforcing increase in the complexity of these systems or environments made possible, precisely because the mutual challenges growing with complexity demand an increase in the complexity of these systems and environments. Not least, self-awareness with regard to mental operations, and the possibility of (political, scientific, artistic etc.) self-descriptions with a view to social, communicatively operating systems, are expressions of this co-evolutionarily enabled increase in complexity.

Communication indeed requires a complex environment (usually conscious and neural systems), just as (self-)consciousness requires a complex environment (usually a neural system and social systems). The decisive factor, however, to state this once again, is the operational and informative diversity of this environmental complexity. Thinking works because it cannot be identified with communication; and communication works because it cannot be identified with thinking.

From a systems-theoretical point of view, the Turing test works in its disposition precisely because it does not identify or indicate mental operations. Only because thoughts are strictly excluded from social, communicatively operating systems is the test open to the possibility, or the conclusion, that the complexity communicatively achievable in the "imitation game" is not necessarily conditioned by the environmental complexity of mental or neural operations. Rather, the environmental complexity essential for the existence of systems can in principle also be attributed to non-transparent machine operations, such as those of artificial neural networks, and thus at best to an artificial consciousness.

Evolution of Artificial Intelligence

That it cannot in principle be ruled out that communicative complexity, such as could be ascertained through Turing's "imitation game", can also be brought about by "artificial intelligence", that is, by environmental operations entirely different from mental and neural operations, can also be made plausible by a look at evolutionary history.

The autopoiesis of systems, i.e. their self-creation or self-preservation, presupposes a very specific type of operation. In the abstract, these must be operations that connect to precisely those operations which make them possible in the first place. Such operations can never occur "by themselves", but always require a network of operations of the same type. This type of operation makes systems possible, in continuous differentiation from the environments these operations make possible, and is at the same time only made possible by systems (networks of operations).

In terms of evolutionary history, the following operations satisfying this type can be identified. First of all, with a view to cells, the paradigm of autopoietic systems, operations of this type on a molecular basis can be assumed.6 With regard to the ecosystem, the operation of reproduction (initially cell division) can be identified as what enables its maintenance (autopoiesis). Reproduction (such as cell division) enables reproduction, but already presupposes reproduction for this possibility. An evolutionarily enabled modification of the form of reproduction through cell division, namely sexual reproduction, made the differentiation of distinct biological species possible. The demarcation of different biological species, or their maintenance as systems, is achieved through the reproduction of sexual reproduction. Reproduction (whether through cell division or sexuality) reproduces reproduction and thus enables living systems.7

Social systems were made possible by the evolutionary development of communication, initially in the form of spoken language utterances. Communication, too, can be assigned to the type of autopoiesis-enabling operations discussed above: communication presupposes communication, and communication connects to communication. However, it must be assumed that communicative complexity is only possible in differentiation from, and co-evolution with, a correspondingly complex environment: in distinction, on the one hand, from consciousness as a system that enables and presupposes thoughts on the basis of thoughts, and, on the other hand, from neural systems.

Both the operations of consciousness and those of neural systems correspond to the type of operations that allow autopoiesis. In the case of neural systems, it is the operational dynamics of changes in state of the neural network: the specific state of the connections between neurons as a neural network determines the following state of the network, just as it was itself determined by the preceding state. Changes of state in neural networks lead to changes of state, which lead to changes of state, and so on.

From an evolutionary-theoretical perspective, nothing speaks against the operation of state changes leading to state changes, for example the deletion and creation of artificial neurons or connections, or the strength (weighting) of connections, also being carried out on the basis of artificial neural networks, ultimately materialized in transistors, resistors and capacitors.
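The operational figure described here, a network state that determines the next network state, which in turn determines the next, can be illustrated in a few lines of code. The following is a minimal sketch added for illustration; the network size, the tanh update rule and the Hebbian-style weight adjustment are assumptions, not claims about how biological or any particular artificial network actually operates.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                  # number of artificial neurons (assumed)
weights = rng.normal(0, 0.5, (n, n))   # connection strengths ("state of the connections")
state = rng.normal(0, 1.0, n)          # current activation state of the network

for step in range(100):
    # The current state, filtered through the current connections,
    # determines the following state: changes of state lead to changes of state.
    state = np.tanh(weights @ state)

    # The connections themselves can change as well (here a simple,
    # illustrative Hebbian-style adjustment), so the network's structure
    # is shaped by its own previous operations.
    weights += 0.01 * np.outer(state, state)
    weights *= 0.99                    # mild decay, a stand-in for pruning connections
```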

However, it has not yet been possible to construct autopoietic systems using such networks. What is already successfully possible, at the level of social systems, is support by artificial neural networks in processing complex data with given input and output, for example in translating language or in recognizing visual or auditory patterns. Access to very large digitized amounts of data ("big data"), now available, allows artificial networks to be "trained" ("deep learning"): the data-processing operations possible in such networks (changes of state leading to changes of state) are adapted to given input and output (for example texts and their validated translations), in order then to calculate outputs for the input of new data of a similar kind.
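As a rough illustration of what "training" means here: the adjustable parameters of a network are modified until given inputs are mapped onto given outputs, after which new, similar inputs can be processed. The sketch below uses a tiny toy task (the XOR mapping stands in for pairs such as texts and their validated translations); the network size, learning rate and number of training steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Given input and output pairs ("training data").
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# A small network: its weights are the adjustable "connections".
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

lr = 0.5
for epoch in range(5000):
    # Forward pass: the input, filtered through the current weights,
    # produces the network's output.
    H = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(H @ W2)))

    # "Training": adjust the weights so that the given inputs are mapped
    # onto the given outputs (gradient descent on the squared error).
    delta_out = (out - Y) * out * (1 - out)
    grad_W2 = H.T @ delta_out
    grad_W1 = X.T @ ((delta_out @ W2.T) * (1 - H**2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# After training, input of a similar kind is processed by the adapted network.
print(np.round(1 / (1 + np.exp(-(np.tanh(X @ W1) @ W2))), 2))
```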

It is striking that even at the level of data processing, the non-transparency of artificial neural networks can be observed. What applies to neural (or indeed artificial) networks or systems apparently already applies at the level of the intra-systemic processing of complex data (here in relation to social systems): these do not depict or represent consciousness or communication in their environment. Rather, it is precisely the operational and informative decoupling, and thus non-transparency, that ensures mutual complexity (surprises, suggestions, irritations) and that alone enables the constitution of systems. Systems, determined by their own structures, cannot surprise themselves, but can only be surprised by their complex environments (which may themselves be systems).

Accordingly, it is not systems themselves that have creativity at their disposal; creativity arises only from the operational and informative closure of systems towards their environments. Operations based on communications, thoughts or changes of state (in neural or artificial networks) can only reproduce themselves in relation to their own operations, yet precisely because of this operational and informative closure they are able to surprise, stimulate and irritate one another reciprocally. Only in this way, despite the determination of systems by their own structures owing to the self-referential nature of their operations, is the mutual construction (not transfer!) of new information possible.

From this perspective, for example, the profitable, creative, "surprising", "unpredictable" moves of the Google software "AlphaGo" (in the board game Go) result from the non-transparency of the computational operations (changes of state) of the artificial neural networks used. From a systems-theoretical point of view, the non-transparency of the computing processes, which can already be observed at the level of data processing, is not to be deplored or understood as a flaw, but rather testifies to its functional necessity.

Hopes and fears related to artificial intelligence

The fear that new, fundamentally autopoiesis-enabling operations, such as changes in state in artificial neural networks that lead to changes in state, can develop a (life-) destructive potential is entirely justified.

However, this assumption cannot be made plausible on the basis of speculations about the effects of "digitization" on modern society, for example by invoking the horror scenario that future generations of robots could "replace" humans, and not only in the world of work. Rather, looking at the more recent history of evolution, it can be shown empirically that the evolutionary development of new, autopoiesis-enabling operations can indeed have destructive, life-threatening consequences.

With the emergence, roughly 50,000 to 500,000 years ago, of spoken language operations that ultimately enabled communication (language leaves no fossils, so exact dating is difficult), completely new forms of autopoietic systems could develop, namely social systems. Within an evolutionarily very short period, namely up to the present day, social systems have, through their expansion and through environmental pollution and change (for example climate change, soil sealing, industrial agriculture with its use of pesticides, pollution of water, etc.), led to a mass extinction of biological species.

In particular, it was a socio-evolutionarily enabled modification of the linguistic form through which the operation of communication in social systems is maintained that resulted in a kind of explosive development. Within 500 years (hardly more than a "blink of an eye" in evolutionary time), writing and its dissemination through the printing press enabled an expansion and increase in the complexity of social systems that has now indeed reached life-threatening proportions in the "human-made" mass extinction of biological species. In any case, it should be noted in this context that social systems maintained exclusively through oral communication (tribal societies) reproduced themselves far more stably and, to a certain extent, in a more "environmentally friendly" way than modern society.

It is not "the human being", but an evolutionarily enabled form of operation (communication), which enabled the constitution of social, extremely successfully expanding complex systems. These systems certainly do not pose a threat to the ecosystem, which ultimately operates on the basis of bacterial structures, but they are obviously and verifiably able to eradicate a large number of subsystems of the ecosystem (biological species).

In any case, it cannot be ruled out that the evolutionary development of a new type of operation, artificial neural networks, which could enable autopoiesis (the maintenance of operations through those very operations), will in principle reach destructive dimensions similar to those of the operation of communication. That this scenario is nevertheless rather improbable is not due to the (fruitless) development potential of artificial neural networks, but to the obvious (self-)destructive potential of social systems.

From a systems-theoretical point of view, it can be stated that social systems have developed a complexity that has enabled extremely successful expansion, but that they are generally under-complex insofar as their self-descriptions usually fail to take into account that specific systems can only exist in permanent differentiation from an equally specific environment. In other words, social systems (and probably also systems operating on the basis of thoughts) generally lack a sense of their own necessary, restrictive preconditions. The destruction of the environment amounts to the destruction of the system. A specific consciousness, for example, is existentially bound to its specific environmental conditions (neural system and communicative environment) and cannot be "decoupled" from them.

Particularly in connection with research efforts on artificial intelligence, a frequently lacking understanding of the mutual existential dependency of systems and their environments becomes apparent. This is expressed in the associated hopes as well as fears. From a systems-theoretical point of view, the expectation that it might be possible to preserve consciousness as a unit, as it were, independently of its specific environment ("mind uploading") appears simply absurd. The empirical observation that even the smallest doses of substances (such as LSD) introduced into the environment of consciousness, the neural system, lead to serious changes of consciousness should make this clear enough.

But the fear that artificial intelligence could end up threatening or even exterminating humanity is likewise irrational. If (social) evolution actually led to autopoietic systems based on operations in artificial neural networks, these would depend existentially on the complexity of their environment, namely on living and neural, but probably also social systems.

A look at the history of evolution shows that new forms of autopoiesis-enabling operations have at no point simply replaced the forms established until then. On the contrary: systems based on (sexual) reproduction (the ecosystem, biological species), systems based on changes of state in networks (e.g. neural systems), systems based on thoughts (consciousness) and systems based on communication (social systems) depend on one another existentially in their system-environment relations.

Last but not least, research on "mind reading", which essentially assumes that the latest imaging methods (functional magnetic resonance tomography) would allow a direct look into the "world of thoughts" of a neural system, testifies to a generally lacking understanding of the existentially necessary operational and informative autonomy of systems in relation to their environments. (Jörg Räwel)
