
Abstract

Semantic Publishing in an Interdisciplinary Scholarly Network. Theoretical Foundations and Requirements

The study examines the preconditions for adopting semantic web technologies for a novel, specialized, interdisciplinary medium of scholarly communication that enables the synchronicity of publication and knowledge representation on the one hand and the dynamic bundling of assertions on the other. To this end it is first necessary to determine a concept of “(scholarly) publication” and of neighboring concepts. These considerations draw on theories associated with radical constructivism. From them derives a critique of mainstream knowledge representation, which has resigned itself to being unable to represent the dynamics of knowledge. Finally, the study presents a conceptual outline of a technical system that builds upon the known concept of nanopublications and is called the “scholarly network”. The increased effort of publishing in the scholarly network is outweighed by the benefits of this publication medium: it may help to render research outputs more precisely and to raise their connectivity by reducing the complexity of assertions. Beyond that, given wide participation, it would generate an openly accessible and finely structured discourse archive.


Preface

The present work is a revised version of my master’s thesis, which I submitted in May 2014 under the title “Scientific Publishing in the Semantic Web - also in Cultural Studies” at the Institute for Library and Information Science at Humboldt University in Berlin. The change of title is due to the fact that I originally wanted to limit myself to developing an idea for a semantic publication medium that would meet the requirements of cultural studies, which is why I submitted a correspondingly formulated topic to the examination board. Only towards the end of the writing process did it turn out that this disciplinary restriction is neither necessary nor useful.

I would like to thank my supervisors Peter Schirmbacher and Martin Gasteiner for all their suggestions. Their reviews gave the original thesis an average grade of 1.3 (very good). Being supervised by an information scientist and practitioner on the one hand and by a cultural historian with an affinity for computer science on the other was my express wish, so as to receive feedback both from the group of potential service providers and from the target group at an early stage of my considerations. This wish is also reflected in the composition of my proofreaders, who were recruited from information practice and literary studies (Wolfram Seidler), ancient history (Stefan Paul Trzeciok), computer science (Gerhard Gonter) and sociology (Kaspar Molzberger). I am infinitely grateful to all of them for their comments and corrections!

1 Introduction

The dynamics of the Internet can only be fully exploited for the success of scientific communication if researchers have access to special media that also support the dynamics of communication itself. In the context of communication, meanings are never fixed. The complexity of scientific communication should be reduced not at the expense of the plurality of meanings, but through the distillation of meanings and statements, their dynamic bundling and the avoidance of redundancy. For the time being, the Semantic Web should be seen as an experimental communication medium rather than as a knowledge repository that holds truths and can provide exact answers to all questions, for no such thing exists, even if ontologies claim otherwise.

Parts of the expert community1 in knowledge organization apparently refuse to deal with ontologies for the Semantic Web: if one speaks of ontology, it should be understood as a philosophical foundation, “not to be confused with homonym schemes for machine treatment of semantic information” (Gnoli, McIlwaine, and Mitchell 2008)2, which at the same time reveals that in 2008 there was still only a rather vague idea of what the Semantic Web can and should do. Publication practice likewise gives hardly any indication that the chance to considerably facilitate and accelerate the generation of knowledge by automatically, computer-aided, linking the statements conveyed in publications has already been recognized (see e.g. Neylon (2012) and Bourne (2008)).

The current state of scientific reflection on semantic publishing is almost entirely limited to suggestions for specific applications. The present study uses Niklas Luhmann's theory of social systems to investigate the basis for the adoption of semantic publication technologies in science. This theoretical decision was made because systems theory not only provides extensive analyses of the science system and a sophisticated concept of communication and thus of society, but also integrates a theory of form and thus of media. When examining the foundations of knowledge representation in information science, this set of instruments enables me to gain fascinating insights, which are to be made comprehensible in Chapter 3 in particular.

All of the more concrete proposals for semantic publishing known to me - with the exception of one, see Chapter 4 - come from the life sciences.3 Although the Semantic Web should also be known in cultural studies thanks to the numerous edition projects of the Digital Humanities, it has so far, as far as I know, hardly been considered as an application for contemporary publications. The cultural sciences are particularly interesting for my question because the case seems distant and obvious at the same time. On the one hand, the Semantic Web has so far only prevailed in the processing of clearly structured data: taxonomies in biology, genetic research, geography, but also in the cultural field, e.g. for information about music and films (facts: yes, analyses: no). On the other hand, cultural studies are predestined to use a medium that allows pluralities of meaning and can depict discourses. It is no coincidence that the Semantic Web has so far hardly been used for this purpose, because the ontological thinking on whose basis the Semantic Web has begun to be built allows nothing other than collections of facts. How do abstract concepts, together with the struggle over their determination and the connection of these concepts into statements, i.e. theory, enter the Semantic Web in controversial form, as a scientific publication? It will quickly become apparent, however, that this requirement is not discipline-specific.

Due to their methods and the inherent importance of theories, large parts of the humanities and social sciences generally need tools that facilitate the quick grasp of definitions of different origins and of their theoretical contexts. The term “cultural studies” in the following is intended to cover precisely those areas of the humanities and social sciences whose methods are not primarily empirically oriented, since in the empirical fields extended tools, which are not dealt with here, may meet the specific needs.

The cultural sciences are indeterminate in their subjects and methods and are determined precisely by this: everything can become their subject; there is no fixed canon of methods. To speak with Paul Feyerabend: epistemological anarchism prevails. What makes cultural studies research recognizable as such is its special form of circular self-reference: the quality of a cultural studies research result can only be judged by how far the chosen methodological and theoretical program is exhausted in the analysis of a social problem. As a result of this exhaustion, such research then inevitably leads to an adaptation of theory and method, which is itself to be regarded as a research result. “So by using a certain method, a certain procedure, a certain vocabulary you cannot improve results on the same scale - you can only do them differently” (Daniel 2006, 15). This leads to enormous degrees of freedom, especially in the structuring of scientific documents and the design of publications, which in turn makes it more difficult to prepare them for processing by machines, since clear, expected patterns cannot be recognized in the structure of the texts or in the argumentation. In the natural sciences, such patterns serve as an entry point into the Semantic Web for publications.

The study assumes that the functioning of semantic web technologies, commonly depicted as a layer model,4 is known in its essential aspects. The advantages of these technologies are to be put into a promising relationship with the demands that scientific communication places on a communication medium, and an idea of a suitable system is to be developed. For this purpose, proposals based on communication theory must suffice in the context of this study. These observations can then be operationalized for empirical research to verify the presupposed requirements. In addition, they can serve as a basis for technical developments.
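To make the later argument more tangible, the elementary building block of these technologies, the statement as a subject-predicate-object triple grouped in the nanopublication pattern mentioned in the abstract, can be sketched in a few lines. The following is a deliberately simplified, library-free illustration, not a conforming implementation; all URIs and names are invented:

```python
# Illustrative sketch only: a minimal model of how a single scholarly
# assertion could be represented as RDF-style triples in the nanopublication
# pattern (assertion + provenance + publication info).
# All URIs and names are hypothetical.

EX = "http://example.org/"

def triple(s, p, o):
    """A statement as a plain (subject, predicate, object) tuple."""
    return (EX + s, EX + p, EX + o)

# One nanopublication = three small named graphs.
nanopub = {
    "assertion": {
        # The actual scholarly claim, reduced to one linkable statement.
        triple("conceptA", "isNarrowerThan", "conceptB"),
    },
    "provenance": {
        # Where the assertion comes from.
        triple("assertion1", "wasDerivedFrom", "publication42"),
    },
    "pubinfo": {
        # Metadata about the nanopublication itself.
        triple("nanopub1", "createdBy", "authorX"),
        triple("nanopub1", "cites", "nanopub0"),
    },
}

def subjects(graph):
    """All subjects occurring in a named graph."""
    return {s for (s, _, _) in graph}
```

Because every claim is a small, addressable unit with its own provenance, such statements can be linked and bundled dynamically across publications, which is exactly the property the study relies on later.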

Social discourse now makes it possible to premise open access for the further considerations on science communication,5 not least because of political decisions on the right of second publication in Germany and, via the 8th EU research framework programme Horizon 2020, because of similar developments in many countries around the world. In addition, major organizations such as the Open Knowledge Foundation work towards establishing the idea of open science, which encompasses free access not only to research results but also, among other things, to the means and tools of research. From the perspective of this study, this entails the requirement that the software with which publications are produced must also be open source.6 In addition, permanent and persistent availability is required for research results.7 Electronic publications, which are assumed here as the normal case (see also Kaden 2013), have the disadvantage compared to printed ones that they can be imitated and falsified more easily if their integrity is not protected by special measures, which are not discussed here.

That being said, certain acceptance problems of electronic publishing in the humanities seem to arise from the fact that it is now quite easy to produce and present a publication that looks no less professional than classic publishing products without using publishing services. As Kaden (2013) also notes, this sometimes calls the professionalism of electronic publications into question as regards their content. All suggestions for electronic publication formats will arouse this skepticism for the time being. This seems to be a typical transitional phenomenon that occurs whenever media usage changes. One chance, however, would be to create a format that can no longer plausibly be described as an imitation of the analog format. This is also linked to the development of new mechanisms of quality assurance and thus of reputation building, the investigation of which, however, must be reserved for other studies.8 The publishing industry is undoubtedly changing,9 but a change in the social function and the internal forms of publication does not seem to be in sight at the moment.

The adaptation problem when introducing a new type of publication medium has several dimensions:

  1. In the factual dimension, the unbroken demand for printed books in cultural studies and for PDFs in all disciplines seems to confirm the researchers who publish in this way (cf. Börsenverein des Deutschen Buchhandels 2013). On the other hand, very few researchers concern themselves with questions about the Semantic Web, particularly in the social sciences and humanities, and here almost exclusively in the Digital Humanities. This study explains the extent to which a new media break can free science from traditional selection mechanisms that have not been adapted to the new digital conditions, from duplicate research and from laborious literature searches. At the same time, an idea of this new medium is sought that can be easily conveyed to researchers without going into technical details.

  2. In the temporal dimension, one has to go to great lengths to formulate statements in such a way that they are machine-readable. Even the original form of semantic labeling, the creation of indexes, could never develop into a general standard despite its great popularity with recipients. However, if one assumes that cultural scholars continue to see their text, which is sometimes understood as an artistic achievement, as their starting point, then it seems likely that the examination of their own text, intended to produce a distillate of the statements it contains, also promotes the clarity of the argumentation in the source text.

  3. In the social dimension, the problem cannot be solved so easily: since the science system does not currently provide any formal recognition for the indicated effort, there are hardly any incentives to undertake it. There is a Nash equilibrium between the established researchers: publishing semantically is only attractive if the others do the same. Without a critical mass of statements that can be linked, the underlying system cannot be used for research purposes and one's own investment would be wasted. Nobody will be the first to change their own publication practice, as the time required for this gives the competition advantages. The only way to escape the Nash equilibrium is through conviction.
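The Nash equilibrium described in the third point can be made precise with a tiny coordination game. The payoff numbers below are invented for illustration; only their ordering matters: the investment in semantic publishing pays off solely when the other side also adopts it, so both "everyone stays traditional" and "everyone publishes semantically" are stable states.

```python
# Strategies: "T" = traditional publishing, "S" = semantic publishing.
# payoff[(me, other)] = my payoff; numbers are illustrative only.
payoff = {
    ("T", "T"): 2,  # business as usual
    ("T", "S"): 2,  # I lose nothing if only the other invests
    ("S", "T"): 0,  # my extra effort is wasted without a critical mass
    ("S", "S"): 3,  # linked statements benefit everyone
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if no player gains by deviating alone."""
    return all(payoff[(a, b)] >= payoff[(x, b)] for x in "TS") and \
           all(payoff[(b, a)] >= payoff[(y, a)] for y in "TS")
```

Under these assumptions `is_nash("T", "T")` and `is_nash("S", "S")` both hold, while a lone deviation to semantic publishing does not pay: exactly the coordination problem the text describes, escapable only by moving the whole community at once, i.e. through conviction.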

After all, the results of this study cannot be aimed primarily at the publishing researchers themselves, but at those institutions that, by supporting research, simultaneously promote the science system to which publishing undoubtedly belongs. The authors should “be relieved of the specific technical problems, but be ready to help develop professional working environments” (Mittler 2011) in order to achieve a meaningful division of labor. Publishers10 are not themselves promoters of the science system, but they can be understood as service institutions acting on behalf of those sponsors. Increasingly, it is above all those publishers who hold huge market shares that come under criticism.11 Researchers as well as science management and libraries are faced with a poor, yet hardly negotiable, price-performance ratio. The publishers' willingness to innovate cannot always be reconciled with the state of technical development. Therefore, the service facilities of research institutions should not leave the field entirely to the publishers and should at least know the state of development in order to be able to respond to the, presumably still rare, inquiries from authors. Ultimately, the researchers decide whose service they prefer and who can thereby establish and develop themselves. Especially for publicly funded institutions, however, sufficient incentives should exist to offer alternatives to commercial solutions on the net, above all: costs that remain controllable in the long term and the weakening of the market position of commercial providers. Highly innovative micro-projects that are conceived from the outset as experiments rather than as viable broad-spectrum technologies are still being funded again and again. The present study aims to contribute to the conception of universal scientific publication infrastructures.

The focus of Chapter 2 of this study is on definitions that open up the horizon of possibilities. This includes definitions of the scientific publication and the document inspired by systems theory and the theory of form, as well as an explanation of their significance for scientific communication. Chapter 3 aims, on the one hand, to explain why a new scientific publication medium has to be sought and to what extent the Semantic Web is suitable for this. On the other hand, the concepts of knowledge representation, semantics and ontologies that currently dominate the information sciences pose difficulties: they are poorly compatible with the way scientific communication works, as explained in the previous chapter. Towards the end of that chapter, the requirements for the new medium are formulated. Chapter 4 searches for traces of already implemented realizations of these requirements and offers a streamlined overview of current articles on semantic publishing in order to show possibilities for connection. Finally, Chapter 5 develops an idea of a new medium from the space of the possible and names it “semantic science network”. Its implementation, however, requires several paradigm shifts, as already indicated in Chapter 3. The effort promises not only a better fulfillment of the current requirements for the publication system, but also the automatic generation of new hypotheses and the easy detection of inconsistencies in scientific communication.

2 Function and form of a scientific publication

A designation should not be defined according to what appears to be its appropriate meaning as a term, but the other way round: the definition is derived from the everyday use of the word on the one hand and from its professional-scientific uses on the other, i.e. through the analysis of different meanings of the same designation and, at best, also of similar meanings of other designations. This consideration of semantics can prevent misunderstandings.

2.1 The scientific publication

As a basis for determining a concept of publication, the Berne Convention for the Protection of Literary and Artistic Works12 in its 1979 version is suitable because, as the result of an international discussion process, it has the character of a legally and politically binding decision. There was a struggle for common terms not only with regard to harmonization with the different national copyright provisions: the agreement had to prove itself in practice and therefore meet the protection requirements of the authors of the time. A consensus was thus established that provided the legal system with the necessary prerequisites to render the services required by both the science system and the art system.

The Berne Convention emphasizes three main features that make a publication: the dissemination of the work, with the consent of its authors, to an extent that appears appropriate to the needs of the public. This concept of publication excludes the presentation of an original work, which in contrast to its reproduction carries a much higher social prestige, e.g. in performative works or in architecture.

When Riehm et al. (2004) define a publication as a communication process that is characterized as indirect, mediated and asynchronous and “intended for the public, for a more or less anonymous audience”, almost all three of these aspects are missing. Such a definition, even if it were related only to scientific publishing - a restriction it does not explicitly include - comes into considerable conflict with international copyright law: a concept of the work is completely absent and the degree of dissemination remains too vague.

It also seems questionable to describe the publication as a communication process: general linguistic usage knows the term on the one hand as the designation of an object and on the other as the designation of a process, but the process it means is that of promoting a work from its singular existence into the realm of perceptibility by many. This also covers the aspect of reproduction from the Berne Convention, even if electronic publishing is not a reproduction in the technical sense (see Section 2.2), but in a social sense: the repeated and yet each time individual perception, coupled with the reproduction of behavior typical of science. However, the publication itself can initially only be observed as an event. Content-related aspects, here: research results, are imprinted on the publication medium, but require reception in order to set social processes in motion.

Riehm et al., however, do not focus on communication about the content at all, but place the acceptance or rejection of a manuscript at the beginning of the publication process, which in the latter case would probably end at that very point, and otherwise before the start of reception. But what if reception does not happen? Can one speak of communication if only its prerequisites are met, but the realized social component tends towards zero?13 The case of non-perception of the publication is evidently ruled out by the mandatory use of “recognized channels”. Thus preprints would not count as publications, and of course neither would works made available on the Internet without the aid of a publisher. One objection would be that the function of any form of assessment appears questionable as long as it is supposed to do without content-related criteria and remains solely a formal condition. If one follows the definition by Riehm et al., one has to conclude that gray literature does not count as publication. Copyright, however, applies to it. And what would it be then? Riehm et al. answer: ordinary documents.

The Deutsche Nationalbibliographie records in “Series B - monographs and periodicals outside the publishing book trade” books, [...] and electronic publications.14 According to a classic textbook of library science, “published information [...] comprises documents [...] in analog or digital form that are produced, reproduced and issued to the public, or to a defined part of the public, by publishers, political, social or private associations, organizations or institutions” (Umstätter 2011, 10f.). In addition, the evaluation of gray literature is gradually changing: studies show that its quality is assured in most cases and that there is demand for it. There is therefore no longer any reason why it should be collected less intensively by libraries than literature published by publishers (see Gelfand and Lin 2013).

The asynchronicity of writing and reading as well as the mediatedness are not special features of the communication medium publication, since these characteristics apply to any communication that makes use of the medium of writing or takes the form of an audiovisual broadcast. It is a “necessity of mass communication” (Luhmann 1997, 308). Interaction among those present15 is characterized by the possibility of mutually orienting oneself by the behavior of the other, assuming expectations, perceiving reflexively, i.e. speculating about how one's own behavior is perceived by the other. In order to attain social relevance, actions must be attributed as such by an observer. Luhmann calls this observer “ego” and the agent “alter”, because the social system only gets going when ego feels addressed. Under normal conditions it is difficult to ignore messages within an existing interaction system without disrupting the system. Ego then distinguishes information from the message and understands something, mind you, completely regardless of what the other, alter, may have meant - that inevitably remains intransparent. This selection process changes through the use of media that make reflexive perception impossible. On the phone, for example, it is still possible to a certain degree, to a limited extent also in chat or in personal correspondence by letter or e-mail, which through their media use are characterized as special forms of interaction marked by an absence that is constantly present in the interaction.

As soon as a scientist distributes an article to be read by an unspecified audience, the communicative selection process described varies16 as follows: the audience must firstly feel addressed, secondly select information from the form of the message and thirdly recognize the possibilities of communicative connection, i.e. understand it. The form of the communication results from the specific communication medium - we will return to this. The first communicative connections here will not be citations, but rather activities in the context of journal marketing or the dissemination of a publication via social networks. The difference from the selection process in interactions, namely the impossibility of reflexive perception, has massive consequences: ignoring a publication in most cases has no consequences for the audience, but negative connections on the same level of communication17, namely that of theoretically worldwide distribution, can have enormous consequences. Another significant difference lies in the chances of correcting what has been communicated. While it is normal to contradict oneself in interactions, this is difficult at the level of society-wide communication, especially as long as that communication relies on the printing of articles. And here we return to asynchronicity: it was extremely important as long as enormously time-consuming and cost-intensive processes had to be used in order to communicate across society. In addition, the report of a correction was extremely unlikely to attract attention. This applies to scientific even more than to mass-media dissemination media, as they are characterized by a lower publication frequency, and months later probably no one will notice the correction of an article that is hardly remembered. With the Internet, however, there is now the chance of a timely correction that is closely linked to the initial publication.
If the publication platform used provides annotation tools with which public comments can be made during reception, the asynchronicity approaches, to the point of insignificance, the synchronicity that prevails in interaction systems.

At this point a definition is to be formulated which will require further explanation; it has to be applicable to all publications of modern science18 from the 17th century to the present day and, if possible, also in the future, i.e. independently of the dissemination media used and the respectively recognized publication formats in terms of content, structure and technology. It reads as follows: A scientific publication is the distribution, initiated and signed by its authors, of a scientific document that presents an original research result and refers to earlier research results of others, in order to make it accessible to a worldwide audience.19 The importance of reputation in the current science system would run counter to anonymous works, which is why the signature of the authors is an essential part of a scientific publication. In addition, a person's name serves as a primary reference point in science. Now this definition needs further explanation: What distinguishes an original research result? Why must the definition contain the reference to previous research results? What is a scientific document? This can only be fathomed if one knows the function of the publication for modern science.

Publications in science are analogous to payments in the economy: they refer to other publications, i.e. to elements of the same type. One cannot pay without first having received a payment, and each new payment will in turn trigger payments: the buried treasure is withdrawn from the economy for the time being. It is the same with publications: the stroke of genius in the drawer is a communicative dead end. Likewise, there can be no scientific publication that does not generate self-references to the science system through citations. Such self-references are currently the most important and indispensable method by which science reduces complexity in order to connect specifically to the previous communication of the science system. Knowledge of certain research results, in particular of publications, is a prerequisite for a productive scientific discussion. If this canon were not reproduced and expanded anew with each publication, the discussion would stall. Publications thus ensure the operational closure of the system, its unity. Citations create a network of interactions that rely on absence. A dialogue develops on whose previous course the publication itself builds, and at the same time the publication weaves itself into the network while describing it. Publications are therefore simultaneously responsible for the formation of structures and for the description of structures in science (Stichweh 1994).20 This sounds a bit as if communication followed an exact plan. It does not: “Some texts are read, some at the right moment. With a high proportion of randomness, this results in new texts for which the same applies” (Luhmann 1990, 59).
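The citation network described above can be sketched as a directed graph in which reception traverses references to earlier publications; a publication without such links, the stroke of genius in the drawer, remains a communicative dead end. A minimal sketch with invented citation data:

```python
from collections import deque

# publication -> publications it cites (all names hypothetical)
cites = {
    "p4": {"p2", "p3"},
    "p3": {"p1"},
    "p2": {"p1"},
    "p1": set(),        # a founding publication of the field
    "drawer": set(),    # cites nothing and is cited by nothing:
                        # the "stroke of genius in the drawer"
}

def reachable(start):
    """All earlier publications reachable from `start` along citations."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in cites[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Here `reachable("p4")` yields the canon that p4 reproduces and extends, while `reachable("drawer")` is empty: without self-references to the system, no connecting communication can arise.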

According to Gérard Genette (2001), citations are part of the “paratext” of a text. He gave his well-known work, which deals exclusively with paratexts of artistic literature, the French title Seuils: thresholds. The prefix “para-” can be understood in the same way, as it describes something antithetical, the oscillation between two sides of a distinction which, however, has a preferred side, namely the text. The paratext points on the one hand from the text outwards, on the other hand from the outside to the text. However, the text cannot simply be thought of as the sequence of sentences in the paragraphs of the main body of a document: paratext also creeps between the lines. A sentence can contain text and paratext at the same time if paratext appears as peritext, i.e. inscribes itself directly into the text, e.g. citations. But there is also peritext that positions itself in the surroundings of the text, such as keywords. The intensification of this would be epitexts, which themselves take on a closed text form and have their own paratexts, such as reviews. As “factual paratext”, paratext can also dispense with an explicit message format and subliminally burden the reception of the text, e.g. if the author is known to belong to a certain school of theory, an affiliation which, however, is constructed by observers of science communication and is not a self-positioning of the author in the text.

The paratext of scientific publications is precisely that which does not belong to the research result to be published, but which makes the publication recognizable as scientific in the first place, e.g. the footnote apparatus and the references to the publication in another publication. Genette goes even further: “The paratext is that accessory through which a text becomes a book and as such appears in front of the reader, and more generally, in front of the public” (Genette 2001, 10). There is one important restriction, however: Genette limits the term to the accompanying apparatus that supports the author's intention, here: to be recognized and received as a scientific publication. Even if Genette speaks of the author and his allies, this does not mean that the author has to authorize every paratext or that it must not contain a negative evaluation of the text. Every paratext that links the text with social communication, i.e. makes it recognizable as compatible, whether positively or negatively, basically increases the likelihood of communication.21 Genette himself allows this conclusion when he summarizes the function of the paratext, here tailored to art communication, as a sluice set up “between the ideal and relatively unchangeable identity of the text and the empirical (socio-historical) reality of its audience” in order to be able to “stay on the same level” (Genette 2001, 388f.). When published in a scientific journal, the title and URL of the journal become paratext, as does the publication on a document server. For the reception of the text and for the likelihood of communicative connections, however, it makes an enormous difference where the paratext is inscribed.

Citations have the special, if not exclusive, property of being paratext of two publications and thereby connecting them. Depending on the point of observation, a text is then text or paratext. A citation reduces the previous publication to at least part of its knowledge gain and thereby generates a knowledge gain itself, i.e. a research result. There seems to be no question that novelty plays a special role in science, in two respects: On the one hand, especially in the natural sciences, there is a preference for linking to newer, i.e. more recent, research results.22 On the other hand, new knowledge differs from what is already known. Novelty is thus, depending on the case, a feature in the temporal or in the factual dimension. For scientific novelty in the factual dimension, a new paradigm does not necessarily have to be established (cf. Kuhn and Krüger 1978); it is sufficient to deviate from what is expected, which leads to new explanations that do more than merely add something to the existing ones: they shift the very perspectives with whose help, in turn, new knowledge is sought (cf. Bachmann-Medick (2009) and Luhmann (1990, 216f.)).

Such original findings not only have to use a medium of dissemination in order to get the chance to become part of a citation network; independent of this external form, only certain content-related forms come into question in which such truth communication can appear at all. Given science's previous production of knowledge, one cannot simply declare arbitrary results to be true: the use of certain methods is required and any conflicts of interest must be disclosed. Such and other forms shape a medium of success, a symbolically generalized communication medium,23 which is then preformed for truth communication and can itself be called truth. But, according to Stichweh (1994), what is not published is not science, even if it is true. The medium of science, truth, can thus also be used, e.g., in intellectual disputes in a private circle, by adhering to certain rules, e.g. not passing off the ideas of others as one's own. Scientific communication on the level of society, however, also requires a medium of dissemination that helps it to be perceived worldwide.

Dissemination media fulfill their purpose for science - namely, to make it more likely that communication offers will be taken up (see section [probability]) - the better, the more potentially interested people, who themselves also publish, they can reach. This can be achieved on the one hand by specializing these media in science, or indeed in certain disciplines or even topics. A complementary media strategy can also make communication more likely: a universal one-stop system, equipped with individual notification services and a sophisticated search engine, as well as content optimized for discovery by external search engines. It is not easy to meet the requirements for the publication system and the publication that result from this or that selection-promoting method. However, even their below-average fulfillment can be offset by the reputation of the author and make questioning the medium superfluous (see Ellison 2011). Along with many other aspects that cluster around the topic of the research result, reputation is communicated with every publication, or rather: communicatively reproduced.

It is not possible to define general characteristics that a publication medium must have with regard to orientation and reputation. In each specific case, the requirements must be compared with the horizon of possibilities. Instead, reference should be made here to the potentials that result from content-related and technical formats, initially independent of a specific publication medium - formats many of which can be found on the Internet, although few, as we will see at the end of this study, are able to exploit the given potential.

2.2 The form of the scientific document

If a publication is the dissemination of research results, these must be available in a disseminable form: as a document. These can be measurement results, but those are difficult for humans to consume - and they would lack the self-references that make a publication easily recognizable as scientific. Based on the logic of George Spencer-Brown (1969), the following form arises:24

Scientific publication = scientific document | research result

Of course, entering into a distinction like this one is contingent; it intervenes in a space of possibility that can be determined more precisely only out of its indeterminacy.25 The inside of the distinction, here e.g. the scientific document, must be designated against the background of the indeterminate, so that the outside, here the research result, as what is excluded, no longer contains all possibilities. The distinction therefore always relates to the outside as well, even if only the inside is indicated. The form is inseparable from the two sides it distinguishes. In doing so, it brings a third element into the form: the observer with his or her individual scope of possibilities. With every designation, new spaces of possibility open up, no longer equipped with all degrees of freedom; the form creates a new medium in which further forms are imprinted - here the scientific publication, for which the same applies in each case. A formless space cannot be observed. With regard to the degree of determinacy or the degrees of freedom of any further distinction, cascades of forms result in a hierarchy which, here and in what follows, should help to better understand what constitutes a scientific publication.

In the proposed form of a scientific publication, the distinction begins by delimiting the scientific document from the broader category of the research result. Research results are to be understood here as products of research activities defined as their own by the researchers. Rudolf Stichweh regards research as the second autopoietic element of science alongside publication (1994).26 Of course, the determination of research results follows certain norms and values that make perception by others and connections more likely. While publication specializes in generating self-references and self-descriptions of science communication at the level of society, research activity seems rather to provide for connections at the level of interaction, which are necessary to cope with everyday life in the laboratory and at the desk.

There can be a variety of scientific documents with whose help research results are published. Research data deserve particular attention here: owing to current efforts to establish standards for their citability, they are increasingly taking on the character of a scientific publication. The distinction between scientific publication and scientific document must therefore be regarded as precarious today, although very little research data currently contains references to the previous research of others, which would be necessary to confer this status. As long as research data acts primarily as a paratext of publications, this will not change.

The form of the scientific publication can also be constructed based on the document:

Scientific document = digital document | document

As stated in the introduction, electronic publication is assumed to be the normal case; accordingly, a conception of the digital document is of particular interest here (cf. Buckland 1998). Its determination should nevertheless be attempted from the outside of the form: with the etymology of the term document. The Latin documentum means something like “example, proof, lesson”, especially in a didactic sense, and does not necessarily refer to anything material. With the emergence of the nation states from the 17th century onward, the concept of the document acquired, on the one hand, the function of denoting something written; on the other hand, the primarily didactic connotation was replaced by a legal and bureaucratic one (Lund 2009). A document had to contain information with relevance potential beyond the personal circle of its owner or author. The new requirement of authenticity corresponds to the zeitgeist of the 18th century: documents can be “genuine” or “forged”. Who decides on authenticity? Depending on the type of document, a person is required who is assigned the appropriate competence and power, e.g. a civil servant. Authenticity and documentary status can also be decided in court, e.g. when a simple letter becomes evidence. It took another hundred years until, for example, the first archaeological finds were recognized as documents of their time. For that, archaeology first had to be recognized as a science.

The dissolution of these conditions - guaranteed authenticity and evidential value - may have begun with the spread of word processing software on home PCs, for since then the file extension .doc has counted as a document; and when every home PC owner can create such documents, it is no longer a big step to understanding every character string held readably on a storage medium - for whomever or whatever - as a digital document. The designation has lost much of its potential to describe content and is therefore, in the English-speaking world, increasingly being replaced by resource, which is more inclusive but carries fewer distracting historical connotations. So if a term is needed that includes the aforementioned historical aspects of the document exclusively, or other, additional ones, one should look for a term that has not become part of everyday language to the same extent as the term document. For example, to mark the established archival value of a digital document, one could speak of a digital archival document. Of course, digitality has consequences: the fact that a digital original cannot be distinguished from its copy also renders the concept of reproduction obsolete: either the copy is exact and therefore identical, or it is not and therefore not a copy. The aspect of reproducibility, which was borrowed from the Berne Convention for the definition of a publication at the beginning of this study, is already contained in the prerequisite of digitality of a current scientific publication and therefore need not be specifically mentioned in the definition. One might object that not every electronic publication is reproducible, because some are copy-protected. Such publications contradict the definition elsewhere if one interprets it strictly: they are inhibited in their dissemination. In a less strict interpretation, they count as a publication as soon as they are distributed to an interested public at all.

How does the digital document relate to the file? A good example is a LaTeX document: is this the “output”, e.g. the PDF that is generated with the help of about a dozen files that could also be considered, or is it the “input”, a file with unspecified content and the extension .tex? What is the status of a BibTeX literature database whose content can be integrated into the .tex file and compiled into a PDF? Each of these files testifies to something and has an externally closed structure tied to its format. In contrast to the file, the concept of the document allows for technical diversity: the outer boundaries of the document are boundaries of meaning. A LaTeX document could very well comprise all of the files mentioned if they stand in a fixed context with one another. Citations generated in the file can be treated in the same way as a log file containing documentation of the creation of the PDF. To demonstrate the idea with other examples: many digitizations consist of as many image files as the original had pages, and both the entire digitization and the individual pages are referred to as documents. Similarly, the files collected from the Nuremberg Trials can be a document, as can each individual file, each individual record... The relation to a context of meaning is decisive. The narrowing of the unspecified document to the digital document restricts the term in only one respect: it must be machine-readable. This merely means that some computer must be able to open the document in a way that at least either opens up further processing options for the machine or allows a meaningful context to emerge for humans. Otherwise we might be dealing with a corrupt file and not a document. For the purposes of this study, which takes a sociological approach and tries to recognize how science communicates, this comparatively open, undemanding concept, close to everyday language use, is helpful.27

For scientific publications, this concept of the document has the consequence that several documents can be combined into one scientific publication as long as at least one of the documents, or the aggregation as a whole, exhibits the features described above, in particular the feature of references to other scientific publications. The possibility of temporarily visualizing “causal relationships” using web technologies (Kaden 2013) is new, but merely a different form of representation for the reference relationships that publications have always exhibited. The systematization of the collection of context information about authors, as is being advanced e.g. by means of ORCID, naturally gives these connections a new quantity, but is nothing fundamentally new for the science system. A global author identification system has not yet been fully implemented; some details of ORCID do offer novel options, e.g. the possibility of self-administration by the authors. There are currently no examples suggesting that the visualization of such reference contexts, with an ever closer mesh, would break up the publication as a closed context of meaning, as Kaden suspects, because the connections in both directions that constitute it remain unchanged in their communication structure, even if they vary in their technical and content design and the communicative forms of the connections are thus unfamiliar in the factual and temporal, but hardly in the social, dimension.28 Ultimately, such applications are novel paratexts, because they locate publications. Metadata have a similar function, but differ from paratexts in that the term says nothing more than its function: describing data. The term permits arbitrary shifts of perspective: it refers to the descriptive data that are immediately relevant when observing a certain data cluster. A scientific publication contains metadata if its structure has been made machine-readable through text markup. A machine can then, e.g., read out a title, determine how many chapters the text is divided into, and recognize cited publications and list them if required. All of this can also be shown by examining the paratext. The decisive difference is that metadata are absolutely dependent on a (today almost entirely technical) system that can relate them to other metadata, because otherwise they lose their function and thus their definition. In libraries, card catalogs used to serve as such a system; today it is databases. Paratexts, by contrast, can be contextualized both intellectually and technically. The following distinction is made:

Metadata = paratext | scientific publication

The publication formats currently in widespread use allow only part of the paratext to be automatically recognized and processed as such, in the form of metadata. These publication formats prevent machines from extracting key statements from a publication and linking and comparing them with key statements from other publications, for example in order to propose an overview of the state of research on a particular topic: a task often undertaken by service staff or student assistants, where they are available. The time required for this is enormous and the susceptibility to errors high.
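What markup-based machine readability would make possible - reading out a title, counting chapters, listing cited publications - can be sketched in a few lines of Python. The markup schema here is invented purely for illustration; a real publication would use an established standard such as JATS or TEI:

```python
import xml.etree.ElementTree as ET

# A hypothetical, deliberately simple markup - not a real schema.
doc = """
<publication>
  <title>On the Form of the Scientific Document</title>
  <chapter n="1">...</chapter>
  <chapter n="2">...</chapter>
  <references>
    <cite key="Genette2001"/>
    <cite key="Stichweh1994"/>
  </references>
</publication>
"""

root = ET.fromstring(doc)
title = root.findtext("title")                      # read out the title
chapters = len(root.findall("chapter"))             # count the chapters
cited = [c.get("key") for c in root.iter("cite")]   # list cited publications

print(title)     # On the Form of the Scientific Document
print(chapters)  # 2
print(cited)     # ['Genette2001', 'Stichweh1994']
```

With a PDF, by contrast, none of this structure is directly addressable; it would first have to be reconstructed heuristically from the layout.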

3 Need for a new medium of scientific communication

3.1 The need for new selection support procedures

Scientific publication is still fundamental to science communication: for the time being, it is indispensable and irreplaceable. The science system currently has no other instrument for establishing its unity. It depends on traceability in order to be able to build on known knowledge and to fulfill its function, namely to provide society with new knowledge. All other structures of the science system are auxiliary mechanisms that are made dependent on publication: above all reputation, positions in science organizations, funding, etc.

Today it is easy to produce a master copy that meets the highest typographical demands. The only requirements are an averagely powerful home computer and the free LaTeX software, as well as computer literacy and patience. The result, mostly a PDF file, meets the requirements of most book publishers, who have relinquished responsibility for creating a master copy. No empirical survey is required to determine that PDF is the predominant and also the most requested format for current scientific publications. This may come as a surprise once you have tried to use it with a mobile reading device. Even a PDF/A is far less suitable for long-term archiving than XML-based formats, since it does not contain only standard-encoded characters and any existing structure cannot be accessed without special software. Any text mining of PDFs involves laborious manual work, so that despite the remarkable advances in the corresponding technologies one inevitably has to ask: Would it not be time, now that digital documents are produced and received ever more frequently but printed out less and less, to ponder alternatives?

Changes are looming for the science system that will transform it from a comparatively exclusive system into a hyper-inclusive one. This can be seen from the statistics: while the world population has not grown hyper-exponentially since the 1970s, as the growth rate has been falling since then, the same cannot be said for the proportion of researchers in this population: the global level of education is rising; in 2009 there were around 8 million researchers in the world. Both the number of individuals involved and the number of publications have been growing by around 3.3% annually for two decades (Ware and Mabe 2009). Even stronger growth is to be expected. By contrast, population growth in 2010 was 1.1%.29 There are also developments such as citizen science, which involves laypersons, e.g., in the collection of observation data. In astronomy and ornithology in particular, citizen participation in scientific discoveries has a long tradition. Nevertheless, the phenomenon is new in its current form: in 2012 a first conference on the topic took place, the Conference on Public Participation in Scientific Research in Portland, Oregon. The EU-funded Socientize Project has set up a forum where one can find out more about citizen science projects. Participation is to be promoted, which is why corresponding recommendations have also been formulated for the EU member states. The express goal is “to conduct top research by opening the laboratory doors and involving lay scientists”,30 namely via electronic infrastructures such as have only been developed since the establishment of the Internet.31

In general, technical developments have enormously accelerated the generation and analysis of research data and thus the production of research results. It is almost surprising that, in relative terms, no more is published than before; yet the number of publications that have to be read is still increasing constantly. It is true that the science system reacts with an ever finer internal differentiation into subdisciplines and even individual topics, to which small communities, within which interaction becomes possible, confine themselves; but this does not completely relieve researchers of the increased reading load, because one's own research must be contextualized in order to be perceived outside a very limited circle and thus to fulfill the function of scientific publication. The increasing differentiation therefore also has undesirable consequences, which in turn trigger a revival of concepts of interdisciplinarity and transdisciplinarity. However, current developments do not suggest that this complexity can be brought under control.

In scientific communication, everything that the applied theory and method allow to claim truth, and that is based on the applicable norms, is potentially relevant. Selections of relevant new knowledge take place against the background of variations of already known knowledge.32 For this central selection mechanism, supporting mechanisms - pre-filters, so to speak - are required in order to cope with the enormous complexity. These include, e.g., the division into disciplines. Luhmann further differentiates between an implicit and an explicit selection (1990, 577). The explicit selection presupposes that the contribution is discussed and thus confirmed or refuted. The implicit selection starts before that: it already excludes a large part of the communication offers simply because, for a wide variety of reasons, they are never received. Various mechanisms support the implicit selection, e.g. the assignment to certain publication media by classic peer review or the impact factor. They usually take effect before publication and thus prevent an explicit selection from ever occurring.

However, such mechanisms are increasingly losing legitimacy, as they run counter, e.g., to the politically promoted demands for democratization and transparency as well as cost efficiency.33 They favor the unnoticed repetition of similar research, which can only be prevented if technically sound research results do not fall victim to implicit selection. The capacity limits of print media have so far34 required the relevance-based selection of scientific documents before publication - this constraint no longer applies. The selection of communication offers is extremely delicate. Today it must be able to protect itself against any suspicion of nepotism or corruption. At the moment there seems to be only one authority that is innocuous in these respects because no intention can be imputed to it: the machine that no single person programs, but which works entirely on the basis of open standards and can be controlled by an open community. Machine processes only offer an advantage over the established selection mechanisms if all publications are freely accessible to humans and machines and prepared according to certain standards, i.e. are comparable. The advantage then lies in the exponentiated processing capacity and in a containment of arbitrary selections. However, the enormous challenge of developing and establishing efficient procedures and standards and making them comprehensible and verifiable for people makes this idea appear unpromising and its implementation unlikely at first glance. This study attempts to make the improbable more likely.

3.2 From knowledge to science representation: a sketch

The success of the Internet suggests building a publication service aimed at a global public. The opportunities arising from interdisciplinary exchange can hardly be overestimated. So there is much to be said for a science-wide service. Many of the valuable innovations that the publishing industry has experienced through the spread of the Internet and open access, however, were initially related only to individual journals or portals, spread, where successful, within the same discipline, and were therefore, owing to their treatment of disciplinary special needs, not very attractive to distant disciplines. In the meantime, those needs have become much better known and can be better taken into account in the development of publication systems. But how should one ascertain needs that can only be based on the shortcomings of the present? It seems advisable first to develop a rough concept that can then be discussed. At this early stage of development, it is much more important to identify the problems to be solved than to come up with a detailed development plan. However, this mode should now be left briefly in order to formulate the basic requirement: a system is needed that supports the following two-stage process:

  1. the content preparation of the research results to be published or already published, which first enables machines, and thus humans, to contextualize the information with as little effort as possible, and

  2. the professional assessment of honesty, quality and relevance, which can take many different forms.

The two stages should ideally reinforce each other, in that the professional assessment flows into the content preparation as context information. Conversely, in an open publication system a professional assessment must first be acquired, which is most likely to succeed if the publication is richly linked to information that has already been assessed as relevant. Several proposals for procedures of post-publication quality assurance have already been made (see e.g. Kriegeskorte (2012)). Such procedures inevitably affect the typical business processes of publishers, which is why a new model was proposed: the overlay model (Ginsparg (1997); Smith (1999); Priem and Hemminger (2012)), which is based on first securing manuscripts persistently on an open-access document server, so that service providers or other institutions can select contributions and non-exclusively perform one or more of the functions otherwise customary for publishers, e.g. quality assurance. This study deals only with the first stage: how should such a manuscript be prepared so that it can be used as a scientific publication for selection-supporting processes, if possible without any reservation of prior selection?
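The two-stage process and its feedback loop can be rendered as a simple data model. The following Python sketch is hypothetical; all names are invented for illustration, and it is not a concrete implementation of the overlay model:

```python
from dataclasses import dataclass, field

@dataclass
class Assertion:
    """Stage 1: a machine-contextualizable unit of content preparation."""
    subject: str
    connector: str
    obj: str

@dataclass
class Assessment:
    """Stage 2: a professional evaluation, in whatever form it takes."""
    assessor: str
    dimension: str   # e.g. "honesty", "quality", "relevance"
    verdict: str

@dataclass
class PreparedPublication:
    uri: str
    assertions: list[Assertion] = field(default_factory=list)
    assessments: list[Assessment] = field(default_factory=list)

    def contextualize(self, assessment: Assessment) -> None:
        # Feedback loop: an assessment itself becomes context information
        # that future processing of stage 1 can draw on.
        self.assessments.append(assessment)

pub = PreparedPublication("urn:example:pub1")
pub.assertions.append(Assertion("citizen science", "broadens", "data collection"))
pub.contextualize(Assessment("reviewer-42", "relevance", "high"))
```

The point of the sketch is merely that assessments are stored alongside, not inside, the prepared content, so that several services can attach them non-exclusively.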

In the case of scientific publications in particular, context information - referred to above as paratext - can be identified comparatively easily, because it is either supplied with the publication, as with the basic title data and the citations, or is easy to find out, e.g. the organizational affiliation via author identification systems or institutional websites. Ever more precise processes are being developed for subject indexing, characterized by an increasing degree of automation. At the moment the focus is on using predefined vocabularies to automatically classify and tag large quantities of documents. Such methods have the disadvantage that the most important thing about the publication can be reproduced only to a limited extent: the variation of known knowledge. According to Stock and Stock (2008, 30f.), knowledge organization systems can only be used for the representation of normal science (Kuhn and Krüger 1978); paradigm shifts in the organization of knowledge generally follow paradigm shifts in science. Would not a procedure also be conceivable through which the publication, authorized by the person publishing the research results, is connected to a freely accessible, dynamic science network that can represent science in real time? What is at stake is the simultaneity of the generation of new knowledge - which, even where it must be characterized as normal science in Kuhn's sense, cannot be represented by existing knowledge organization systems - and its representation, which first of all serves its development, but then also has emergent potential. Of course, predefined vocabularies are helpful for this too, but they could be developed not top-down, from a perspective observing science, but bottom-up, by science itself, as a new universal scientific language, as it were. New, because as a medium it should not merely use the loosely coupled elements of a natural language, nor be limited to one discipline. Rather, the medium should be a technical vocabulary that is composed of the technical languages of all disciplines and can be expanded through individual contributions.
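How such a vocabulary could grow bottom-up out of individual contributions may be sketched as follows - a hypothetical model in Python; identifiers like `T:paratext` are invented for illustration:

```python
# A minimal bottom-up term registry: every publication may contribute a term
# with language-tagged labels and the definition used in that contribution.
registry: dict[str, dict] = {}

def contribute_term(term_id: str, label: str, lang: str,
                    definition: str, source_pub: str) -> None:
    entry = registry.setdefault(term_id, {"labels": {}, "definitions": []})
    entry["labels"].setdefault(lang, set()).add(label)
    # Definitions accumulate over time, documenting the concept's history.
    entry["definitions"].append({"text": definition, "source": source_pub})

# Two contributions, in different languages, to the same concept:
contribute_term("T:paratext", "paratext", "en",
                "accessory through which a text becomes a book",
                "urn:pub:genette2001")
contribute_term("T:paratext", "Paratext", "de",
                "Beiwerk eines Textes", "urn:pub:example1")

print(sorted(registry["T:paratext"]["labels"]))  # ['de', 'en']
```

Because definitions are never overwritten, the registry would over time yield exactly the overview of conceptual histories described below.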

Information science and the information economy, as well as libraries and related service institutions, would then have the task of providing and maintaining suitable infrastructures. It is comparatively unproblematic to imagine an enormously extensive encyclopedia that records every designation, its definition and thus its concept, as it appears in the summarizing formulation of a research result, in a wide variety of languages. Over time, this would inevitably provide an overview of the respective conceptual histories, since the meaning of abstract concepts, especially in the humanities, is constantly up for discussion. It is harder to conceive of a universal vocabulary of a different kind, without which the formulation of such results would be impossible: on the one hand, connectors must link the terms appropriately for the intended statement; on the other hand, they must do so in a way that allows statements which use one of the identifiers of these terms with a different meaning to be compared, e.g. in order to make conflicting statements recognizable for machines. The intellectual processing of the resulting science network is facilitated by comparatively stable connectors or relations. In addition, this connector vocabulary must be non-subject-specific, universal and, above all, manageable in order to enable interdisciplinary and intellectually comprehensible links. This procedure in no way excludes a hierarchical ordering of the terms. On the contrary: such orderings could facilitate the search for analogies between the terms used in the publication on the one hand and in the science network on the other, and counteract the unnoticed creation of synonyms.
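A small, managed connector vocabulary of this kind would, for instance, let a machine flag potentially conflicting statements. The following Python sketch is purely illustrative; the identifiers and connectors are invented, and a real system would presumably build on RDF and shared URIs:

```python
# Statements use term identifiers plus a small, managed connector vocabulary.
CONNECTORS = {"increases", "decreases", "is_part_of"}

# Connectors declared as mutual opposites, so conflicts become machine-visible.
OPPOSITES = {("increases", "decreases"), ("decreases", "increases")}

# (publication, subject term, connector, object term)
statements = [
    ("urn:pub:a", "T:open_access", "increases", "T:citation_rate"),
    ("urn:pub:b", "T:open_access", "decreases", "T:citation_rate"),
    ("urn:pub:c", "T:peer_review", "is_part_of", "T:quality_assurance"),
]

def find_conflicts(stmts):
    """Pairs of statements about the same subject and object whose
    connectors are declared opposites."""
    conflicts = []
    for i, (pa, sa, ca, oa) in enumerate(stmts):
        for pb, sb, cb, ob in stmts[i + 1:]:
            if sa == sb and oa == ob and (ca, cb) in OPPOSITES:
                conflicts.append((pa, pb))
    return conflicts

print(find_conflicts(statements))  # [('urn:pub:a', 'urn:pub:b')]
```

The design point is that conflict detection stays tractable precisely because the connector vocabulary is small and stable, while the term vocabulary may grow without bound.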

However, hierarchies are unsuitable for this context of use as long as they insist on exclusive positions of individual terms within the hierarchy. Such hierarchies may be appropriate for stable taxonomies in biology, but not for knowledge that is controversial. A good example is the forms of a scientific publication from the previous chapter: the “scientific publication” can be classified hierarchically below the “document” and below the “research result” at the same time - these and other arrangements are equally possible. Which distinction is drawn depends solely on the observer. A science network that adequately represents current scientific communication has to be able to absorb all documented observations; it is therefore not independent of the observer, but it is extremely flexible. Quantitatively stronger distinctions and relations can be weighted accordingly by a machine when processing the information. It can no longer be a matter of an individual observer keeping an overview of a knowledge system, as is still possible and desirable for thesauri - at least within a subject area. Thesauri tried to use associative relations to absorb contingent hierarchies, but always had to choose a dominant hierarchy. This concept has to be abandoned if the aim is to represent science as comprehensively as possible and, more importantly, to support it medially. The prerequisites for using technologies that make it possible to generate knowledge orders no longer after the fact but ad hoc, during the generation of knowledge, are given today with semantic web technologies, as will be shown below.
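The observer-dependent polyhierarchy described here - “scientific publication” below both “document” and “research result”, with more frequently drawn distinctions carrying more weight - can be sketched as follows. This is a hypothetical data model, not a concrete standard, though SKOS, for comparison, likewise permits multiple broader concepts:

```python
from collections import defaultdict

# Each documented "narrower -> broader" observation is simply counted;
# no concept is forced into a single, exclusive position in the hierarchy.
broader = defaultdict(lambda: defaultdict(int))

def observe(narrower: str, parent: str) -> None:
    broader[narrower][parent] += 1

# Documented observations by different observers:
observe("scientific publication", "document")
observe("scientific publication", "document")
observe("scientific publication", "research result")

def weighted_parents(concept: str) -> list[tuple[str, float]]:
    """Broader concepts, ranked by how often the distinction was drawn."""
    total = sum(broader[concept].values())
    return sorted(((p, n / total) for p, n in broader[concept].items()),
                  key=lambda x: -x[1])

ranked = weighted_parents("scientific publication")
# 'document' (weight 2/3) ranks ahead of 'research result' (weight 1/3)
```

No dominant hierarchy is chosen; the machine merely weights the documented distinctions, leaving the choice of perspective to the observer.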

If, with the help of a system that is as easy to use as possible, authors could be enabled to integrate their contributions into a network-like structure themselves, or to authorize a service provider to do so, this would be an opportunity to deal with one of the most serious problems of knowledge representation in information science: the impossibility of losslessly transferring information between mental systems. Communication is not the solution but the problem. In the context of the explanation of the three selections of communication it became clear: understanding cannot mean much more than feeling addressed. This is well known in information science,35 but so far there has seemed to be a tendency to come to terms with this problem, which indeed cannot be eliminated. Of course, the same problem also exists between author and recipient, but at least no additional layer is interposed, which sometimes disguises rather than mediates.

If understanding is unlikely even in interpersonal communication, why should we cling to the illusion that computers can be better mediators? This is not about communication between computers or between computers and people: only society communicates, based on the most diverse expectation structures. Social structures are expectations. Computers do not create expectations that could be disappointed and therefore do not contribute directly to the formation of social structures. People expect something from computers, e.g. to increase the likelihood of communication, connectivity and thus understanding, as far as possible. How these expectations are built up would then be the really interesting question. Now, however, it will be shown in more detail how the indicated scientific network could develop on the basis of scientific publications and which approaches already exist. The appropriate keyword here seems to be semantic publishing.

3.3 Science communication and semantics

Semantics serve the self-observation of society; they are “prescriptions worth preserving” (Luhmann 1997, 887). The authority of a semantics is established communicatively; its usefulness in the respective context is permanently checked. In systems theory, the concept of semantics is closely related to that of social structure (see Stäheli 1998). So that research results can be discussed in a focused manner within a scientific discipline, for example, the communication there repeatedly reproduces a special semantics, a technical language. Particularly in interaction systems, for instance in laboratories, great time pressure does not allow an explicit definition of the terms used in each case. One relies on a shared understanding: semantics reduce the complexity of meaning that would otherwise have to be communicated. University education is therefore designed, not least, to train the use of technical languages.

A technology that society used successfully to reduce complexity even before the invention of writing - semantics relies only on the use of symbols - could now serve as a template for a computer-aided scientific network towards which scientific communication can orient itself. This system is not called an information network because it has to go beyond the structuring of data:36 the contextualization of the information through the link with scientific communication is crucial. A general knowledge network that is oriented towards social communication without being restricted to science will have to do without the special features of a scientific network as far as possible in order to be useful, especially the constant carrying along of contingency. Areas of application for semantic technologies are currently developing in particular in business and politics, where a specific approach to knowledge has also developed. In both areas one has to rely on accepting certain things as indisputable, e.g. the characteristics of a product offered or the budget data of an economy. Contingencies enter here, for example, through prices that differ from provider to provider, or through party-political positions on certain budget items that differ from faction to faction. For this purpose, an ontology based on the meanwhile classic model is actually suitable. However, its use does involve risks if sufficient contingencies are not allowed, which cannot be investigated further here.

Networking of all of the named areas of application is obvious. There can be research on companies or political decisions, or political support for companies, each represented by computer. In order to map these external references, one does not rely on a connecting upper ontology (see also Section 3.4), but only on URIs and suitable relations. Semantics depend on the social structure, and this is functionally differentiated. This makes it possible to use technologies specialized in science without having to forego possibilities of observing the scientific environment. If a common ontology is needed in the scientific network, it is, for example, in order to be able to differentiate between research subject and method across disciplines.
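How external references between functionally differentiated domains can be mapped with nothing but URIs and suitable relations may be sketched as follows; all URIs and relation names here are invented for illustration and do not belong to any published vocabulary:

```python
# Sketch: cross-domain references expressed only as (subject, relation,
# object) triples of URIs, without a shared upper ontology.
triples = {
    ("http://science.example/study-42",
     "http://rel.example/researches",
     "http://business.example/company-7"),
    ("http://politics.example/decision-9",
     "http://rel.example/subsidises",
     "http://business.example/company-7"),
}

def references_to(uri):
    """All (subject, relation) pairs pointing at a resource."""
    return {(s, p) for s, p, o in triples if o == uri}

# The company is observable from both science and politics, although
# neither domain shares an ontology with the other.
assert len(references_to("http://business.example/company-7")) == 2
```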

The terms of a technical semantics are used again and again even without computer support - this is what makes them a semantics. The idea is therefore to facilitate this process, which is already known from completely analog communication, by using the Internet, not as an end in itself, but to support - analog - research. Another advantage is that arguing in this way makes it easy to convey what the network is for. There is no need to adopt a new logic or to gain an understanding of how the technologies work. It is sufficient to reflect on everyday experience with scientific communication, including the importance of authorship, and to transfer it to a new medium: “bringing humanity fully into the information loop requires data structures and computational techniques that enable us to treat social expectations and legal rules as first-class objects in the new web architecture” (Hendler and Berners-Lee 2010).

The development of semantics is then automatically recorded, and the authorship of variations is verifiable. Since, despite the increasingly widespread use of mobile technologies, it cannot initially be assumed that people will make use of computer-aided semantics in interactions, a temporary restriction of the scientific network to written communication, i.e. to research and publications, makes sense. So when a semantic publication is mentioned in the following, what is meant is a scientific publication in which statements about research results are formulated using semantics that are documented in a freely accessible publication and research system, which can in turn be expanded through semantic publications. The system, to be called the science network, builds for this purpose on standardized semantic web technologies and significantly reduces the complexity and improbability of scientific communication.
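The nanopublication concept on which the science network is to build separates a single assertion from its provenance and publication information. A minimal sketch follows; the field and relation names are chosen freely here (the actual nanopublication model uses RDF named graphs, not Python dictionaries):

```python
# Minimal sketch of a nanopublication-like record: one assertion,
# kept separate from its provenance and its publication information.
nanopub = {
    "assertion": ("ex:concept-A", "ex:narrower-than", "ex:concept-B"),
    "provenance": {"asserted_by": "ex:author-1",
                   "method": "ex:literature-review"},
    "pubinfo": {"created": "2014-01-01", "license": "CC-BY"},
}

def author_of(np):
    """Authorship of a variation stays verifiable via provenance."""
    return np["provenance"]["asserted_by"]

assert author_of(nanopub) == "ex:author-1"
# The assertion itself remains a single, maximally connectable triple:
assert len(nanopub["assertion"]) == 3
```

Because each statement is reduced to one attributed triple, the complexity of assertions shrinks while their connectivity across publications grows.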

The system-theoretical concept of semantics harmonizes better with that used by the Semantic Web than the reading more widespread in the humanities, where “semantics” is translated as “meaning”. Following the “inventor” of the Semantic Web, Tim Berners-Lee (2001), David Shotton (2009; 2012) developed an extensional definition of the semantic publishing of scientific articles using such a definition of “semantics”: semantic publishing is accordingly about “anything that enhances the meaning” (D. Shotton 2009); in detail this can be achieved through the following:

  • enriching the meaning of the publication, e.g. through the visual reinforcement of statements or the embedding in a dynamically retrieved context not itself contained in the publication,

  • their linking to publications that are similar in any way,

  • making underlying data, including images, available in a form that has the greatest possible reuse value,

  • the provision of machine-processable, extensive metadata and

  • making it easier to find them.

Not every one of these features makes sense as an enrichment of meaning, especially the last two aspects. Extensive metadata make a publication easier to find, but this does not mean that it has any greater meaning, unless one understands meaning not in its factual but in its social dimension. In general, a collection of measures open to expansion seems unsuitable as a definition for a very general goal that can also be achieved through classical techniques of scientific writing, since it is extremely inclusive and does not seem instructive. Of the numerous enrichment techniques that Shotton (2009) demonstrated on the basis of an already published article, only two can claim relevance here: the semantic mark-up of terms used in the text, which links them to information sources on the Internet, e.g. specialist databases, and a semantic representation of all references used, which contains not only the bibliographic data but also information about how this literature was referenced. We will return to this ontology (see Section 3.4).
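The second of these techniques, recording how literature was referenced, can be sketched as typed citations. The relation names below merely imitate the style of Shotton's citation typing and are simplified for illustration:

```python
# Sketch of typed citations: each reference records not only the cited
# work but also *how* it was cited (relation names are invented here,
# loosely modelled on citation-typing vocabularies).
citations = [
    ("ex:this-article", "cites-as-evidence", "ex:paper-1"),
    ("ex:this-article", "disagrees-with",    "ex:paper-2"),
    ("ex:this-article", "uses-method-in",    "ex:paper-1"),
]

def citation_types(citing, cited):
    """All documented ways in which `citing` references `cited`."""
    return [p for s, p, o in citations if s == citing and o == cited]

assert citation_types("ex:this-article", "ex:paper-1") == [
    "cites-as-evidence", "uses-method-in"]
assert citation_types("ex:this-article", "ex:paper-2") == ["disagrees-with"]
```

A reference list enriched in this way makes agreement and disagreement machine-observable instead of hiding both behind an undifferentiated citation count.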

Quite similar to Shotton's definition is that of enhanced publishing, now an established term, which arose from the DRIVER II project (Verhaar 2008): an enriched publication is accordingly a document (“ePrint”) supplemented by sources, data, visualizations, comments, etc. The enrichment does not refer to an expanded functionality through the machine-readable representation of the text itself, but to an expansion of the information: “the fact that digital media make even more information available will only increase the problem. Digital texts, if we merely conceive of them as delimited containers that carry a certain amount of information, will not help us to solve this problem either” (Gradmann and Meister 2008). The problem that Gradmann and Meister identify here for the social sciences and humanities ultimately also applies to science in general: semantics are dynamic and unstable. As long as scientific publications are designed as containers and are not linked to other publications in a machine-readable manner, the aforementioned enhancements are techniques that only increase the complexity of scientific communication. Additional material is added to the publications without the complexity being reduced in any way. Only semantic methods make it easier to understand dynamics and contexts through clearer connections to terms (and thus designations and meanings), authors and discourses using URIs. The proposals from the DRIVER II project, and to a large extent also those from Shotton, are projects that pursue goals other than semantic publishing as defined above.

This series also includes the DFG project “Future Publications in the Humanities (Fu-PusH)”, which was approved for two years in January 2014 and is intended to develop recommendations for the production and design of digital publications (Fu-PusH 2014). It is particularly worth mentioning here because the humanities have a need to test the usual publication formats for their functionality in a primarily digital reception environment. So far, however, the Semantic Web is apparently not a mandatory component of this environment, in particular not on the content level with regard to the networking of knowledge, but at most with regard to the need, meanwhile becoming established, to make at least the metadata of publications available on the Internet as Linked (Open) Data: a good opportunity for libraries to familiarize themselves with the technologies of the Semantic Web, which at present is not yet widely understood as a task. It is therefore not to be expected that the ideas presented here will be widely discussed in the near future. In any case, by this point it should have become clear that it is worthwhile for the humanities and social sciences, too, to observe the developments in the life sciences and to think about the use of similar technologies that relate scientific knowledge at least to a larger topic, going beyond the annotated edition of an author's œuvres, and integrate it so that scientific communication grows dynamically.

3.4 Overcoming the ontological worldview with ontologies?

If one understands the Semantic Web as an Internet enriched with “meaning”, criticism is obvious, because it cannot fulfill the claim conveyed by its title to convey meaning. Kim Veltman (2004) considers the term wrong and suggests “transactions web” or “logic web”, since formal logic is the only objective dimension of meaning and has therefore also found its way into the development of the Semantic Web. With the establishment of OWL around 2002, one refrained from making other dimensions representable. However, this limitation is not explained. Even if elements connected by logic as a self-contained system allow valid conclusions to be drawn within the system,37 there can be no question of objectivity: the structure of the system, its use itself and the choice and arrangement of the elements are all contingent and dependent on the observer. “The idea of fixed denotative meaning relationships, ignoring the contextual dependence of language in the sense of Wittgenstein's language games, the untranslatability of cultural concepts [...] of static ‘ontologies’ - in short: ignoring the entire line of tradition from Nietzsche's critique of language through the Frankfurt School to post-structuralism - makes this approach appear foreshortened and problematic” (Dudek 2012). By relying on “semantic primitives”38 (Sowa 2000) when developing the Semantic Web, one would, according to Veltman, overlook what is crucial; his position, reported in the following, is supported by Gradmann (2009):

  1. Instead of using the Semantic Web to represent the functional worldview that has meanwhile become established, one would continue to perpetuate Aristotle's substantialist worldview: “Everything is presented as if this is the way ‘it is’ ontologically, rather than providing frameworks whereby what a thing ‘is’, what it means, and how it relates to other things, change as the framework changes.”

  2. One would now be able, and would have to, differentiate the concept of existence more finely: for instance existence that is merely apparent, existence in name only, or existence in fact.

  3. Words are not unambiguous; their meaning can change depending on the context of use (e.g. technical language). One should therefore differentiate between meanings, designations and their defined connection: concepts.

  4. Relationships can also be subordinate, determining or ordinal: "Needed is an approach to semantics that places it in a larger context of semiotics, lexicology, lexicography, semasiology and onomasiology."

  5. Terms are defined differently depending on place, time, political circumstances, etc.: “We need databases to reflect that meaning changes both temporally (whence etymology) and spatially, even within a culture (e.g. national, regional and local differences) and especially between cultures.” But the designations of meanings can also change.
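The fifth demand, that designations vary by place and time, can be sketched as qualified labels rather than one fixed string. The concept identifiers, labels and validity years below are invented examples, not data from any actual vocabulary:

```python
# Sketch: labels of a concept qualified by place and period of validity,
# so that temporal and spatial variation stays representable.
labels = [
    {"concept": "ex:concept-1", "label": "wireless",
     "place": "en-GB", "from": 1900},
    {"concept": "ex:concept-1", "label": "radio",
     "place": "en-US", "from": 1920},
]

def label_for(concept, place, year):
    """Most recent label of `concept` valid for a given place and year."""
    candidates = [l for l in labels
                  if l["concept"] == concept and l["place"] == place
                  and l["from"] <= year]
    if not candidates:
        return None
    return max(candidates, key=lambda l: l["from"])["label"]

assert label_for("ex:concept-1", "en-US", 1950) == "radio"
assert label_for("ex:concept-1", "en-GB", 1950) == "wireless"
```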

All of Veltman's demands can be supported here, but some of them can already be implemented with OWL, RDFS and SKOS, and sometimes are; however, one cannot yet speak of widespread use of these principles. In order to make clear the change of relations between things or concepts in specific contexts, it is necessary to formulate statements, e.g. in semantic publications, that do not claim general validity but are controversial. This cannot be achieved with ontologies and URIs alone. For a scientific network, there must be empty spaces that can be freely filled by authors on the conceptual level. Another major problem arises from the fact that the development of ontologies is currently not systematically versioned. Revised labels of classes and relations are rarely documented.
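What such controversial statements without a claim to general validity might look like can be sketched as attributed assertions that are held side by side instead of being forced into one consistent ontology; all identifiers are invented:

```python
# Sketch: contradictory statements coexist because each one carries
# its author; the contradiction is information, not an error.
statements = [
    {"triple": ("ex:termX", "ex:broader", "ex:termY"),
     "author": "ex:author-1"},
    {"triple": ("ex:termX", "ex:not-broader", "ex:termY"),
     "author": "ex:author-2"},
]

def positions_on(subject, obj):
    """All documented positions relating two terms, with authorship."""
    return {(s["triple"][1], s["author"])
            for s in statements
            if s["triple"][0] == subject and s["triple"][2] == obj}

# Both positions remain observable and attributable:
assert positions_on("ex:termX", "ex:termY") == {
    ("ex:broader", "ex:author-1"),
    ("ex:not-broader", "ex:author-2")}
```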

The cited critics of the Semantic Web mainstream, however, neglect to differentiate more precisely between the concept of ontology in philosophy, which means a worldview that subordinates all other distinctions to the distinction being/not-being, and ontology as a “formal, explicit specification of a shared conceptualisation” (Gruber 1993), an aid for systems of automated reasoning. The latter concept does not necessarily entail consistency with the former, even if this is often asserted. In purely theoretical terms, a computer could be put in a position to infer paradoxes, but to the best of my knowledge no such experiment has yet been undertaken, perhaps because all current thinking is still primarily socialized by several thousand years of ontological philosophy, with the result that paradoxes are considered something to be avoided. However, philosophy itself reflects the lack of reflection in ontological observation by recognizing the tendency of ontology to produce serious errors: “Ontology therefore guarantees the unity of the world as the unity of being. Only nothing is excluded, but ‘nothing’ is lost” (Luhmann 1997, 896). But what to do if what is observed turns out to be nothing? Only if every distinction is marked as fundamentally contingent can one protect oneself from consequential errors. All primarily hierarchical systems of knowledge representation bear this risk.

In order to reduce the susceptibility to errors, the distinction episteme/doxa was introduced in antiquity in order to split off what has not been sufficiently proven - but again with the preceding ontological distinction, for both the undoubted and the supposed are. The introduction of logic and later - with extended methods - the scientific system save communication from the ontological trap by being able to denote (existing) untruths. The problem of the either/or, which excludes every third with the aim of simplification, is only invisibilized - until one realizes that the untrue cannot be after all! Falsehoods are highly informative and worth preserving, but this cannot be conveyed in science: negative results fall victim to implicit selection (Brembs, Button, and Munafò 2013), although they even have to be formulated as truths in order to be recognized as science at all. Possibly many communicative problems could be avoided if one admitted the third, which already goes largely unnoticed in every distinction: namely the observer.

The idea of multi-valued logic is not new; it has been discussed for about 100 years. The development of the computer in particular has contributed to admitting values between true and false, or to introducing the value of the (not yet) known, already proposed by Aristotle. But all of these proposals are nothing but variants of two-valued logic. The crucial point to which Gotthard Günther (e.g. 1980)