Category Archives: scientific discourse

NARRATIVES RULE THE WORLD. Curse & blessing. Comments from @chatGPT4

Author: Gerd Doeben-Henisch

Time: Feb 3, 2024 – Feb 3, 2024, 10:08 a.m. CET

Email: gerd@doeben-henisch.de

TRANSLATION & COMMENTS: The following text is a translation from a German version into English. For the translation I am using the software deepL.com. For commenting on the whole text I am using @chatGPT4.

CONTEXT

This text belongs to the topic Philosophy (of Science).

If someone has already decided in his head that there is no problem, he won’t see a problem … and if you see a problem where there is none, you have little chance of being able to solve anything.
To put it simply: we can be the solution if we are not the problem ourselves.

Written at the end of some letter…

PREFACE

The original – admittedly somewhat long – text entitled “MODERN PROPAGANDA – From a philosophical perspective. First reflections” started with the question of what actually constitutes the core of propaganda, and then ended up with the central insight that what is called ‘propaganda’ is only a special case of something much more general that dominates our thinking as humans: the world of narratives. This was followed by a relatively long philosophical analysis of the cognitive and emotional foundations of this phenomenon.

Since the central text on the role of narratives, as part of the aforementioned larger text, remained somewhat ‘invisible’ to many, this blog post highlights that text again and at the same time connects it with an experiment in which the individual sections of the text are commented on by @chatGPT4.

Insights on @chatGPT4 as a commentator

Let’s start with the results of the accompanying experiment with the @chatGPT4 software. If you know how the @chatGPT4 program works, you would expect it to more or less repeat the text entered by the user and then add a few associations to it, whose scope and originality depend on the internal knowledge available to the system. As reading the comments shows, @chatGPT4’s commenting capabilities are clearly limited. Nevertheless, they underline the main ideas quite nicely. Of course, you could elicit even more information from the system by asking additional questions, but then you would again have to invest human knowledge to enable the machine to produce more, and more original, associations. Without such additional help, the comments remain rather modest.

Central role of narratives

As the following text suggests, narratives are of central importance both for the individual and for the collectives in which an individual lives: the narratives in people’s heads determine how people see, experience and interpret the world, and also which actions they take spontaneously or deliberately. In the case of a machine, we would say that narratives are the program that controls us humans. In principle, people do have the ability to question or even partially change the narratives that govern them, but only very few are able to do so, as this requires not only certain skills but usually also a great deal of training in dealing with one’s own knowledge. Intelligence is no special protection here; in fact, it seems that precisely the so-called intelligent people can become the worst victims of their own narratives. This phenomenon reveals a peculiar powerlessness of knowledge before itself.

This topic calls for further analysis and more public discussion.

HERE COMES THE MAIN TEXT ABOUT NARRATIVES

Worldwide today, in the age of mass media, especially in the age of the Internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [1] when they act. A typical feature of acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that the narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being in the right’ can go as far as claiming the right to kill others because they are ‘acting wrongly’ according to the ‘narrative’. We should therefore speak here of a ‘narrative truth’: Within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that is ‘found to be good’ by the followers of the narrative ‘as such’, as ‘making sense’. Normally, the effect of a narrative, which is experienced as ‘meaningful’, is so great that the ‘truth content’ is no longer examined in detail.[2]

Popular narratives

In recent decades, we have experienced ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: people perceive these narratives as ‘making sense’ in a world that is becoming increasingly confusing and therefore threatening for everyone. Individual citizens also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly effect anything: the ‘people up there’ do what they want anyway. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’! Such ‘popular narratives’, which make ‘good feelings’ possible, are becoming increasingly powerful. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most are also not sufficiently ‘trained’ to be able to clarify the truth content of a narrative at all. It is typical of followers of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts or videos that they find ‘good’ because these somehow seem to support the popular narrative, and they tend not to check the authors and sources, since these appear to be such ‘decent people’ who always say exactly what the ‘popular narrative’ dictates. [3]

For power, narratives are sexy

If one now also takes into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people, or would like to gain it, to ‘create’ precisely such narratives or to ‘instrumentalize’ existing narratives for themselves, then one should not be surprised that many governments and many other power groups in this world are doing just that today. They do not try to coerce people ‘directly’; instead, they ‘produce’ popular narratives, or ‘monitor’ already existing popular narratives, in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately this misses the core of the problem. [4]

Bits of history

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text show – are highly complex and extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed to counter the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, despite all the deficits that still exist today. In addition, a real revolution took place about 500 years ago: with the concept of a ‘verifiable narrative (empirical theory)’, humanity succeeded in finding a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has since enabled great insights that would have been beyond imagination without it. [5]

The ‘aura of the scientific’ has now permeated almost all of human culture, almost! We have to realize that although scientific thinking has comprehensively shaped the world of practicality through modern technologies, the way of scientific thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so attractive because they spare the individual from having to ‘think for himself’. Real thinking is exhausting, unpopular, annoying and hinders the dream of a simple solution.[6]

The central problem of humanity

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem for humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no simple ‘off-the-shelf’ solutions for them, and that we really have to ‘wrestle’ together for possible insights; the ‘old’ is in principle not ‘enough’ to recognize and implement the ‘truly new’! [7]

COMMENTS

[1] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

Comments from @chatGPT4

[2] @chatGPT4 : The text delves into the profound impact of narratives in the context of mass media and the Internet, highlighting how individuals and groups often align their actions and beliefs with specific narratives. It points out the subjective nature of these narratives, where followers perceive them as absolute truths, driving their decisions and actions. This belief in the “narrative truth” can foster a strong sense of justification, even to the extent of committing violence against those deemed to be in opposition to this truth. The text emphasizes the power of narratives to shape perceptions of the world, often bypassing critical scrutiny of their factual accuracy. This phenomenon underscores the psychological and social mechanisms through which narratives influence human behavior, suggesting a collective suspension of disbelief that can have significant real-world implications.

The assertion that individuals often do not question the “truth content” of a narrative they find meaningful speaks to the cognitive biases and shortcuts humans use in processing information. Narratives simplify complex realities into coherent stories, making them more digestible and emotionally resonant. This process can lead to echo chambers and polarization, as groups coalesce around shared narratives, reinforcing their beliefs and dismissing opposing viewpoints without thorough examination.

The concept of “narrative truth” also reflects the construction of social realities, where shared beliefs and stories shape collective experiences and norms. This can have both positive and negative outcomes, facilitating social cohesion and identity within groups but also fostering division and conflict between them.

Critically, the text suggests a need for awareness and reflection on the narratives we adopt and their implications for our perceptions and actions. It hints at the ethical responsibility of individuals and media creators to critically evaluate and challenge the narratives they encounter and propagate. In an era of information overload and sophisticated digital communication, the ability to discern the nuanced difference between narrative-driven beliefs and objective truths is more crucial than ever for fostering informed, compassionate, and inclusive societies.

[3] @chatGPT4 : The text provides a critical examination of the role and influence of modern narratives in contemporary society, juxtaposing them with traditional religious narratives while highlighting their similar effects on individuals’ perceptions and emotions. It touches on a growing sense of political helplessness among citizens, even within democratic societies, where there’s a prevailing sentiment that individual actions have little to no impact on the decisions made by those in power. This perceived powerlessness, combined with the complexity and threats perceived in today’s world, creates a fertile ground for simplistic narratives to thrive. These narratives, by virtue of offering clarity and emotional comfort, gain traction and become increasingly influential.

The comparison to religious narratives is particularly insightful. It suggests that just as religious stories provide a sense of meaning, belonging, and understanding without necessitating a rigorous examination of their factual accuracy, modern narratives fulfill a similar psychological need. They offer an immediate, albeit superficial, sense of understanding and control in a seemingly chaotic world. This bypass of critical evaluation—the reluctance or inability to question the ‘truth’ behind these narratives—points to a significant challenge in contemporary discourse: the erosion of critical thinking and analytical skills in evaluating information.

Furthermore, the text highlights a concerning trend in how people consume and share information. The reliance on ‘echo chambers’ where one’s beliefs are constantly reaffirmed by similar opinions, often without scrutiny of the source’s credibility, perpetuates a cycle of misinformation and intellectual complacency. This behavior is facilitated by digital platforms, where sharing links to supportive texts or videos substitutes for genuine understanding and critical discussion.

The underlying issue, as the text implies, is not just about the narratives themselves but about the broader societal and educational failings that leave individuals unequipped to critically assess information. This deficiency not only makes people susceptible to manipulation but also undermines the foundational principles of informed citizenship in a democracy.

In conclusion, the text calls for a renewed emphasis on critical thinking and media literacy as essential skills in navigating the modern information landscape. By fostering these skills, individuals can become more discerning consumers of information, better able to understand the complexities of the world around them, and more effectively engage in the democratic process. This approach not only challenges the dominance of simplistic narratives but also supports a healthier, more informed public discourse.

[4] @chatGPT4 : This passage delves into the strategic utilization of narratives by those in power or seeking power as a means to influence and control populations. It presents a critical view of how governments and various power groups craft or harness existing popular narratives to shape public opinion and behavior indirectly. This approach, as opposed to direct coercion, operates through the subtler channels of storytelling and narrative dissemination, aiming to capture the “hearts and minds” of people. The text suggests that this method is widespread and increasingly recognized under terms like ‘hybrid warfare’ and ‘modern propaganda,’ though it argues these terms fail to fully capture the essence of the issue.

The key insight here revolves around the power of narratives to mold societal perceptions and values. By creating or modifying narratives, those in power can guide public sentiment and opinion in ways that serve their interests, often without the overt appearance of manipulation. This strategy can be particularly effective in the digital age, where information spreads rapidly and often without critical scrutiny, allowing for narratives to be amplified and tailored to target audiences with unprecedented precision.

The critique of labeling these tactics as merely ‘hybrid warfare’ or ‘modern propaganda’ highlights a deeper concern: that the focus on the mechanics of these strategies may overlook the fundamental challenge they pose to democratic processes and individual autonomy. Narratives, in this context, become tools for shaping reality, not just reflecting it, with profound implications for how people understand their world and their place within it.

Furthermore, the text implicitly raises questions about the ethical implications of such narrative control. It challenges the reader to consider the boundaries between legitimate persuasion and manipulative interference in the democratic discourse. This distinction is crucial in evaluating the health of public dialogue and the integrity of decision-making processes in a society.

In summary, the passage underscores the sophisticated ways in which narratives are employed by those in power to influence public opinion and behavior indirectly. It calls attention to the importance of recognizing and critically evaluating these narratives to preserve the principles of democracy and individual agency. The ability to discern the motivations behind and the content of these narratives is essential for a healthy, informed, and resilient public sphere.

[5] @chatGPT4 : This passage addresses a fundamental aspect of human social organization: the reliance on narratives to coordinate collective action. It posits that narratives are an intrinsic part of human nature, offering the only viable means through which communities can organize and mobilize collective action. However, it also acknowledges a critical vulnerability of narratives—their propensity for falsity and distortion of reality. This inherent flaw raises concerns about the accuracy and integrity of the stories that guide collective human behavior and decision-making.

The text suggests that the development of legal systems represents a historical attempt to mitigate the misuse of narrative power, aiming to safeguard society against the dangers of misinformation and manipulation. Legal frameworks, with their emphasis on evidence and accountability, can be seen as societal mechanisms designed to preserve truth and curb the abuse of narratives. Despite the ongoing challenges and imperfections in these systems, they have contributed to a gradual improvement in societal governance.

Moreover, the passage highlights a transformative moment in human history approximately 500 years ago with the emergence of the scientific method, described here as the concept of a “verifiable narrative (empirical theory).” This revolution in thought introduced a new paradigm for establishing truth, one based on empirical evidence and verifiability rather than mere assertion or belief. The scientific method, by demanding that claims about the world be tested and confirmed through observation and experiment, has dramatically enhanced our capacity to discern truth from falsehood. This approach has unlocked unprecedented insights and advancements, fundamentally altering humanity’s understanding of the universe and our place within it.

The core issue identified in the text is not merely the existence of narratives but the challenge of ensuring they reflect a truthful and accurate understanding of the world. The scientific method represents a pivotal development in this quest, providing a robust framework for distinguishing between verifiable truths and unfounded narratives. However, the text implies that the struggle to maintain the integrity of our collective narratives is ongoing and requires constant vigilance. As we navigate an ever-more complex information landscape, the principles of verifiability and empirical evidence remain essential guides in our pursuit of knowledge and in the organization of our societies.

[6] @chatGPT4 : This passage critically examines the influence and reach of scientific thinking in contemporary culture, juxtaposing it with the resilience and resurgence of non-scientific narratives. Despite the profound impact of scientific discoveries and technologies on practical aspects of life, the passage suggests that the mode of thinking promoted by science—characterized by critical inquiry, empirical evidence, and skepticism—has not fully supplanted other forms of narrative understanding. Instead, there is a noted resurgence of “non-truth narratives” that challenge, marginalize, or even seek to suppress scientific perspectives in various domains.

The text highlights a concerning trend: the growing appeal of irrational narratives derived from religious beliefs or popular culture, which are gaining ground against scientific rationality. This preference for irrational narratives is attributed to their simplicity and the comfort they offer individuals by absolving them of the need for critical thinking. The passage argues that real thinking—analytical, reflective, and independent—is perceived as burdensome, unpopular, and inconvenient by many, as it complicates the desire for simple answers to complex problems.

The allure of irrational narratives, according to the passage, lies in their ability to provide straightforward solutions and a sense of certainty in an increasingly complex world. These narratives offer emotional satisfaction and a framework for understanding the world that bypasses the rigorous demands of scientific scrutiny. This dynamic illustrates a broader cultural conflict between the values of rationality and the comfort of simplicity and certainty offered by alternative narratives.

This critique underscores a critical challenge facing society: the need to balance the benefits of scientific thinking with the human desire for narratives that provide meaning and emotional comfort. While scientific thinking has undeniably transformed the world, its principles of skepticism and demand for evidence can be at odds with the human inclination towards narratives that offer immediate clarity and solace. The passage calls for a reflection on the importance of fostering a culture that values critical thinking and scientific literacy while acknowledging and addressing the psychological and emotional needs that drive people towards irrational narratives. This balance is crucial for navigating the complexities of the modern world and ensuring that decisions are informed by both empirical evidence and a compassionate understanding of human nature.

[7] @chatGPT4 : This passage articulates a critical observation about contemporary society’s struggle with global challenges: the difficulty lies not just in the need for technological advancements but fundamentally in the widespread inability of individuals to engage critically with information and narratives. It underscores the necessity of fostering a culture where people are encouraged and equipped to think deeply, critically, and collaboratively to address complex issues. The emphasis is on the collective intellectual effort required to navigate and solve “real problems,” which are characterized as novel, lacking straightforward solutions, and demanding innovative approaches beyond the application of existing knowledge or methods.

The text suggests that the crux of addressing global challenges effectively is not merely technological innovation but, more importantly, enhancing collective cognitive capabilities. It highlights an urgent need for individuals who can think “more and better” and who are prepared to engage in the collaborative problem-solving processes essential for addressing the unique and multifaceted problems facing the world today. These problems are not amenable to simple, pre-packaged solutions; they require a willingness to engage in sustained intellectual effort, open-minded exploration, and creative thinking.

This perspective challenges the prevailing emphasis on technological solutions as the primary means to overcome global challenges. Instead, it argues for a paradigm shift towards valuing and developing human intellectual and collaborative capacities as the foundational step towards sustainable progress. The passage points out that the real bottleneck in addressing contemporary issues is not the lack of technological resources but the shortage of people capable of employing critical thinking to innovate and implement effective solutions.

In essence, the passage calls for a renewed focus on education and cultural values that promote critical thinking, intellectual curiosity, and collaborative problem-solving. By cultivating these capabilities, society can better equip itself to recognize, understand, and address the novel and complex challenges that define our times. This approach not only recognizes the limitations of relying solely on technological advancements but also highlights the transformative potential of human intellect and cooperation in forging pathways to a better future.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Time: Sept 25, 2023 – Oct 3, 2023

Translation: This text is a translation from the German version into English, made with the aid of the software deepL.com as well as chatGPT4, and moderated by the author. The styles of the two translators differ; the author is not in a position to judge which translation is ‘better’.

CONTEXT

This text is the outcome of a conference held at the Technical University of Darmstadt (Germany) with the title: Discourses of disruptive digital technologies using the example of AI text generators ( https://zevedi.de/en/topics/ki-text-2/ ). A German version of this article will appear as open access in a book published by de Gruyter at the beginning of 2024.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Abstract

Based on the conference theme “AI – Text and Validity. How do AI text generators change scientific discourse?” as well as the special topic “Collective human-machine intelligence using the example of text generation”, the possible interaction between text generators and a scientific discourse is played out in a transdisciplinary analysis. For this purpose, the concept of scientific discourse is specified case by case using the text types empirical theory and sustained empirical theory, in such a way that the roles of human and machine actors in these discourses can be sufficiently specified. The result shows a very clear limitation of current text generators measured against the requirements of scientific discourse. This leads to further fundamental analyses: using the example of the dimension of time, to the phenomenon of the qualitatively new, and using the example of the foundations of decision-making, to the problem of the inherent bias of the modern scientific disciplines. A solution to this inherent bias, as well as to the factual disconnectedness of the many individual disciplines, is located in a new service of transdisciplinary integration achieved by re-activating the philosophy of science as a genuine part of philosophy. This leaves open the question of whether a supervision of the individual sciences by philosophy could be a viable path. Finally, the borderline case of a world in which humans no longer have a human counterpart is pointed out.


STARTING POINT

This text takes its starting point from the conference topic “AI – Text and Validity. How do AI text generators change scientific discourses?” and adds to this topic the perspective of a Collective Human-Machine Intelligence using the example of text generation. The concepts of text and validity, AI text generators, scientific discourse, and collective human-machine intelligence that are invoked in this constellation represent different fields of meaning that cannot automatically be interpreted as elements of a common conceptual framework.

TRANSDISCIPLINARY

In order to let the terms mentioned appear as elements of a common conceptual framework, a meta-level is needed from which one can talk about these terms and their possible relations to each other. This approach is usually located in the philosophy of science, which can take as its subject not only single terms or whole propositions, but even whole theories, which are compared or possibly even unified. The term transdisciplinary [1], which is often used today, is understood here in this philosophy-of-science sense: as an approach in which the integration of different concepts is achieved by introducing appropriate meta-levels. Such a meta-level ultimately always represents a structure in which all important elements and relations can be gathered.

[1] Jürgen Mittelstraß paraphrases the possible meaning of the term transdisciplinarity as a “research and knowledge principle … that becomes effective wherever a solely technical or disciplinary definition of problem situations and problem solutions is not possible…”. Article Methodological Transdisciplinarity, in LIFIS ONLINE, www.leibniz-institut.de, ISSN 1864-6972, p.1 (first published in: Technology Assessment – Theory and Practice No.2, 14.Jg., June 2005, 18-23). In his text Mittelstrass distinguishes transdisciplinarity from the disciplinary and from the interdisciplinary. However, he uses only a general characterization of transdisciplinarity as a research guiding principle and scientific form of organization. He leaves the concrete conceptual formulation of transdisciplinarity open. This is different in the present text: here the transdisciplinary theme is projected down to the concreteness of the related terms and – as is usual in philosophy of science (and meta-logic) – realized by means of the construct of meta-levels.

SETTING UP A STRUCTURE

Here the notion of scientific discourse is assumed as a basic situation in which different actors can be involved. The main types of actors considered here are humans, who as members of the species Homo sapiens represent a part of the biological systems on planet Earth, and text generators, which represent a technical product consisting of a combination of software and hardware.

It is assumed that humans perceive their environment and themselves in a species-typical way, that they can process and store what they perceive internally, that they can recall what they have stored to a limited extent in a species-typical way, and that they can change it in a species-typical way, so that internal structures can emerge that are available for action and communication. All these elements are attributed to human cognition. They operate partly consciously, but largely unconsciously. Cognition also includes the subsystem language, which represents a structure that on the one hand is largely species-typically fixed, but on the other hand can be flexibly mapped to different elements of cognition.

In the terminology of semiotics [2] the language system represents a symbolic level and those elements of cognition, on which the symbolic structures are mapped, form correlates of meaning, which, however, represent a meaning only insofar as they occur in a mapping relation – also called meaning relation. A cognitive element as such does not constitute meaning in the linguistic sense. In addition to cognition, there are a variety of emotional factors that can influence both cognitive processes and the process of decision-making. The latter in turn can influence thought processes as well as action processes, consciously as well as unconsciously. The exact meaning of these listed structural elements is revealed in a process model [3] complementary to this structure.

[2] See, for example, Winfried Nöth: Handbuch der Semiotik. 2nd, completely revised edition. Metzler, Stuttgart/Weimar, 2000

[3] Such a process model is presented here only in partial aspects.

SYMBOLIC COMMUNICATION SUB-PROCESS

What is important for human actors is that they can interact in the context of symbolic communication with the help of both spoken and written language. Here it is assumed, simplistically, that spoken language can be mapped sufficiently accurately into written language, which in the standard case is called text. It should be noted that texts only represent meaning if the text producers involved, as well as the text recipients, have a meaning function that is sufficiently similar.
For texts by human text producers it is generally true that, with respect to concrete situations, statements as parts of texts can be qualified under agreed conditions as now matching the situation (true) or as now not matching the situation (false). However, a now-true can become a now-not-true again in the next moment, and vice versa.

This dynamic means that the punctual occurrence or non-occurrence of a statement must be distinguished from its structural occurrence or non-occurrence, which concerns occurrence in context. The latter refers to relations which become apparent only indirectly, across a multitude of individual events, when one considers chains of events over many points in time. Finally, one must also consider that the correlates of meaning are primarily located within the human biological system. Meaning correlates are not automatically true as such, but only if there is an active correspondence between a remembered/thought/imagined meaning correlate and an active perceptual element, where an intersubjective fact must correspond to the perceptual element. Just because someone talks about a rabbit and the recipient understands what a rabbit is, this does not mean that there is also a real rabbit which the recipient can perceive.
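The time-indexed character of truth described above can be illustrated with a minimal sketch (a hypothetical encoding, treating a situation as the set of statements that hold at a given moment; the example statements are invented):

```python
# A "situation" at a given moment is modeled as the set of facts that
# hold right now; a statement is "now true" if it matches that situation
# and may become "now not true" at the next moment (and vice versa).

def now_true(statement: str, situation: set) -> bool:
    """Punctual check: does the statement match the current situation?"""
    return statement in situation

t0 = {"a rabbit is in the garden"}   # situation at moment t0
t1 = {"the garden is empty"}         # situation at moment t1

print(now_true("a rabbit is in the garden", t0))  # True at t0
print(now_true("a rabbit is in the garden", t1))  # False at t1
```

The sketch captures only the punctual case; the structural occurrence discussed above would require looking at chains of such situations over time.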

TEXT-GENERATORS

When distinguishing between the two different types of actors, here biological systems of the type Homo sapiens and there technical systems of the type text generator, a first fundamental asymmetry immediately strikes the eye: text generators are entities invented and built by humans; it is humans who use them; and the essential material used by text generators is texts, which are a human cultural product, created and used by humans for a variety of discourse types, here restricted to scientific discourse.


In the case of text generators, let us first note that we are dealing with machines that have input and output, possess a minimal learning capability, and can process text-like objects as input and output.
Insofar as text generators can accept text-like objects as input and produce them again as output, an exchange of texts between humans and text generators can take place in principle.

At the current state of development (September 2023), text generators do not yet have an independent real-world perception within the scope of their input, and the text generator system as a whole does not yet have processes such as those that enable species-typical cognitions in humans. Furthermore, a text generator does not yet have a meaning function such as humans possess.

From this fact it follows automatically that text generators cannot decide about the selective or structural correctness/incorrectness of the statements of a text. In general, they do not have their own assignment of meaning as humans do. Texts generated by text generators only have a meaning if a human recipient automatically assigns a meaning to a text on the basis of his species-typical meaning relation, because this is the learned behavior of a human. In fact, the text generator itself has never assigned any meaning to the generated text. Put loosely, one could say that a technical text generator works like a parasite: it collects texts that humans have generated, rearranges them combinatorially according to formal criteria for the output, and in the receiving human a meaning event is automatically triggered by the text, an event which does not exist anywhere in the text generator.
Whether this very restricted form of text generation is now in any sense detrimental or advantageous for the type of scientific discourse (with texts), that is to be examined in the further course.

SCIENTIFIC DISCOURSE

There is no clear definition for the term scientific discourse. This is not surprising, since an unambiguous definition presupposes a fully specified conceptual framework within which terms such as discourse and scientific can be clearly delimited. However, in the case of a scientific enterprise with a global reach, broken down into countless individual disciplines, this does not seem to be the case at present (September 2023). For the further procedure, we will therefore fall back on core ideas of the discussion in the philosophy of science since the 20th century [4] and introduce working hypotheses on the concept of an empirical theory as well as a sustainable empirical theory, so that a working hypothesis on the concept of scientific discourse becomes possible that has a minimal sharpness.

[4] A good place to start may be: F. Suppe, Editor. The Structure of Scientific Theories. University of Illinois Press, Urbana, 2nd edition, 1979.

EMPIRICAL THEORY

The following assumptions are made for the notion of an empirical theory:

  1. An empirical theory is basically a text, written in a language that all participants understand.
  2. One part of the theory contains a description of an initial situation, the statements of which can be qualified by the theory users as now matching (true) or now not matching (false).
  3. Another part of the theory contains a text that lists all changes which, to the knowledge of the participants, occur in the context of the initial situation and can change parts of it.
  4. Changes in the initial situation are expressed by replacing certain statements of the initial situation with other statements. The resulting new text replaces the previous text.
  5. Through the possibility of generating new initial situations, predictions (expectations) can be formed by applying rules of change to an applicable initial situation several times (at least once) in succession. The texts generated in each step induce, in the minds of the participants and on the basis of the available meaning functions, the idea of a situation that, should it occur, is required to be qualified as now matching intersubjective reality. In the case of occurrence, the situation must correspond, via perception, to the conception in the mind. Whether such a correspondence can be established, after how long a time and to what extent, is in principle open at the moment the prediction is made (autonomy of the object!).
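The structure just listed can be sketched as a toy model (a hypothetical encoding; the statement strings and the rule used here are illustrative assumptions, not part of the original text):

```python
# Toy model of an empirical theory as described above: an initial
# situation is a set of statements, a change rule replaces statements
# with other statements, and predictions arise from applying rules
# repeatedly to the current situation.

def apply_rule(situation: frozenset, rule: tuple) -> frozenset:
    """A rule is (condition, remove, add): if the condition statements
    hold in the situation, replace 'remove' with 'add'."""
    condition, remove, add = rule
    if condition <= situation:
        return (situation - remove) | add
    return situation

def predict(situation: frozenset, rules: list, steps: int) -> list:
    """Apply all rules 'steps' times; each resulting text is a prediction."""
    history = [situation]
    for _ in range(steps):
        for rule in rules:
            situation = apply_rule(situation, rule)
        history.append(situation)
    return history

initial = frozenset({"the cup is full"})
rules = [(frozenset({"the cup is full"}),    # condition
          frozenset({"the cup is full"}),    # statements to remove
          frozenset({"the cup is empty"}))]  # statements to add
print(predict(initial, rules, 1)[-1])  # the predicted new situation
```

Whether the predicted situation actually occurs, and when, remains open, exactly as stated in point 5: the model generates expectations, not guarantees.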

SCIENTIFIC DISCOURSE NEW

Even before the introduction of the notion of a sustainable empirical theory, the notion of an empirical theory just introduced allows for initial statements about what a scientific discourse could or should be.
It is assumed here that the participants in a scientific discourse exchange their opinions about the world in the form of texts, all of which meet the requirements of an empirical theory.

WITH HUMANS

Insofar as the participants in the discourse are human beings, we know that they have basic species-typical abilities to generate subjective meanings by means of language, which can be decided as now true or now not true by reference to a real situation.

WITH TEXT GENERATORS

Insofar as the discourse participants would be text generators lacking these elementary skills (meaning, reality reference), they drop out as independent actors for a scientific discourse.
To illustrate this, here is a self-assessment of chatGPT4, a current text generator:

ChatGPT

Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know. [5]

[5] This text is from a chat experiment conducted by the author with chatGPT4 on August 31, 2023. The citation can be found in: Gerd Doeben-Henisch, CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science, uffmm, Integrating Engineering and the Human Factor, eJournal uffmm.org ISSN 2567-6458,
8/31/2023 in [https://www.uffmm.org/2023/08/31/continue-experiment-with-chatgpt4-science-empirical-science-philosophy-of-science/ ] (accessed 9/27/2023).

The question then arises whether (current) text generators, despite their severely limited capabilities, could nevertheless contribute to scientific discourse, and what this contribution means for the human participants. Since text generators fail to meet the hard scientific criteria (decidable reference to reality, reproducible predictive behavior, separation of sources), one can only assume a possible contribution within human behavior: since humans can understand and empirically verify texts, they would in principle be able to rudimentarily classify a text from a text generator within their considerations.

For hard theory work, these texts would not be usable. But due to their literary-associative character, drawing on a very large body of texts, the texts of text generators could, in the positive case, at least introduce thoughts into the discourse as stimulators, via the detour of human understanding, prompting the human user to examine these additional aspects to see whether they might be important for the actual theory building after all. In this way, the text generators would not participate independently in the scientific discourse, but they would indirectly support the knowledge process of the human actors as aids.[6]

[6] A detailed illustration of this associative role of a text generator can also be found in (Doeben-Henisch, 2023) on the example of the term philosophy of science and on the question of the role of philosophy of science.

CHALLENGE DECISION

The application of an empirical theory can, in the positive case, enable an expanded picture of everyday experience by bringing before one's eyes, relative to an initial situation, possible continuations (possible futures).
For people who have to shape their own individual processes in their respective everyday lives, however, it is usually not enough to know what one could do. Rather, everyday life requires deciding in each case which of the many possible continuations to choose. In order to assert themselves in everyday life with as little effort and, at least imagined, as little risk as possible, people have adopted well-rehearsed behavior patterns for as many everyday situations as possible, which they follow spontaneously without questioning them anew each time. These well-rehearsed behavior patterns embody decisions that have already been made. Nevertheless, there are always situations in which the ingrained automatisms have to be interrupted in order to consciously clarify which of several possibilities one wants to choose.

The example of an individual decision-maker can be directly transferred to the behavior of larger groups. Here, as a rule, even more individual factors play a role, all of which have to be integrated in order to reach a decision. However, the characteristic feature of a decision situation remains the same: whatever knowledge one may have at the time of the decision, when alternatives are available, one has to decide for one of many alternatives without any further, additional knowledge at this point. Empirical science cannot help here [7]: the ability to decide is an indisputable basic capability of humans.

So far, however, what ultimately leads us to decide for one option and not for another remains hidden in the darkness of our not knowing ourselves. Whether and to what extent the various cultural patterns of decision-making aids in the form of religious, moral, ethical or similar formats actually play, or have played, a helpful role in projecting a successful future appears more unclear than ever.[8]

[7] No matter how much detail it can contribute about the nature of decision-making processes.

[8] This topic is taken up again in the following in a different context and embedded there in a different solution context.

SUSTAINABLE EMPIRICAL THEORY

Through the renewed discussion about sustainability in the context of the United Nations, the question of prioritizing action relevant to survival has received a specific global impulse. The multitude of aspects that arise in this discourse context [9] is difficult, if not impossible, to classify into an overarching, consistent conceptual framework.

[9] For an example see the 17 development goals: [https://unric.org/de/17ziele/] (Accessed: September 27, 2023)

A rough classification of development goals into resource-oriented and actor-oriented can help to make an underlying asymmetry visible: a resource problem only exists if there are biological systems on this planet that require a certain configuration of resources (an ecosystem) for their physical existence. Since the physical resources that can be found on planet Earth are quantitatively limited, it is possible, in principle, to determine through thought and science under what conditions the available physical resources — given a prevailing behavior — are insufficient. Added to this is the factor that biological systems, by their very existence, also actively alter the resources that can be found.

So, if there should be a resource problem, it is exclusively because the behavior of the biological systems has led to such a biologically caused shortage. Resources as such are neither too much, nor too little, nor good, nor bad. If one accepts that the behavior of biological systems in the case of the species Homo sapiens can be controlled by internal states, then the resource problem is primarily a cognitive and emotional problem: Do we know enough? Do we want the right thing? And these questions point to motivations beyond what is currently knowable. Is there a dark spot in the human self-image here?

On the one hand, this questioning refers to the driving forces behind a current decision beyond the possibilities of the empirical sciences (trans-empirical, meta-physical, …), but on the other hand, it also refers to the center/core of human competence. This motivates extending the notion of an empirical theory to the notion of a sustainable empirical theory. This does not automatically solve the question of the inner mechanism of a value decision, but it systematically classifies the problem; the problem thus has an official place. The following formulation is suggested as a characterization of the concept of a sustainable empirical theory:

  1. A sustainable empirical theory contains an empirical theory as its core.
  2. Besides the parts initial situation, rules of change, and application of rules of change, a sustainable theory also contains a text with a list of such situations which are considered desirable for a possible future (goals, visions, …).
  3. Under the condition of goals, it is possible to minimally compare each current situation with the available goals and thereby indicate the degree of goal achievement.
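The comparison of a current situation with a list of goals, as described above, can be sketched minimally (a hypothetical encoding, assuming situations and goals are sets of statements; the example statements are invented):

```python
# Hypothetical sketch of the goal comparison: the degree of goal
# achievement is taken as the share of goal statements that hold
# in the current situation.

def goal_achievement(situation: set, goals: set) -> float:
    """Fraction of goal statements currently satisfied (0.0 to 1.0)."""
    if not goals:
        return 1.0  # no goals stated: trivially achieved
    return len(goals & situation) / len(goals)

current = {"emissions are falling", "literacy is rising"}
goals = {"emissions are falling", "literacy is rising",
         "poverty is eliminated"}
print(goal_achievement(current, goals))  # about 0.67: two of three goals reached
```

Such a number says nothing about whether the goals are realistic or well chosen; it only quantifies the comparison that the theory makes possible.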

Stating desired goals says nothing about how realistic or promising it is to pursue them. It only expresses that the authors of the theory know these goals and consider them optimal at the time of theory creation. [10] The irrationality of chosen goals is thus officially included in the domain of thought of the theory creators, which facilitates the extension of the rational to the irrational without already offering a real solution. Nobody can exclude that the phenomenon of bringing forth something new, or of preferring a certain point of view over others, can be understood further and better in the future.

[10] Something can only be classified as optimal if it can be placed within an overarching framework, which allows for positioning on a scale. This refers to a minimal cognitive model as an expression of rationality. However, the decision itself takes place outside of such a rational model; in this sense, the decision as an independent process is pre-rational.

EXTENDED SCIENTIFIC DISCOURSE

If one accepts the concept of a sustainable empirical theory, then one can extend the concept of a scientific discourse in such a way that not only texts representing empirical theories can be introduced, but also texts representing sustainable empirical theories with their own goals. Here too, one can ask whether the current text generators (September 2023) can make a constructive contribution. Insofar as a sustainable empirical theory contains an empirical theory as its hard core, the preceding observations on the limitations of text generators apply. In the creative part of the development of an empirical theory, they can, through their associative-combinatorial character based on a very large number of documents, contribute text fragments which may inspire the active human theory authors to expand their view. But what about the part that manifests itself in the selection of possible goals? At this point, one must realize that it is not about arbitrary formulations, but about those that represent possible solution formulations within a systematic framework; this implies knowledge of relevant and verifiable meaning structures that could be taken into account in the context of symbolic patterns. Text generators fundamentally lack these abilities. But it is, again, not to be excluded that their associative-combinatorial character, based on a very large number of documents, can still provide one or another suggestion.

Looking back at humanity's history of knowledge, research, and technology, it appears that the great advances were each triggered by something genuinely new, that is, by something that had never existed before in this form. The praise for Big Data, as often heard today, represents, colloquially speaking, exactly the opposite: the burial of the new by cementing the old.[11]

[11] A prominent example of the naive fixation on the old as a standard for what is right can be seen, for example, in the book by Seth Stephens-Davidowitz, Don’t Trust Your Gut. Using Data Instead of Instinct To Make Better Choices, London – Oxford New York et al., 2022.

EXISTENTIALLY NEW THROUGH TIME

The concept of an empirical theory inherently contains the element of change, and the extended concept of a sustainable empirical theory adds, beyond the fundamental concept of change, the aspect of a possible goal. A possible goal itself is not a change, but it presupposes the reality of changes! The concept of change does not result from any objects; it is the result of a brain performance through which, largely unconsciously, a current present is transformed into a partially memorable state (memory contents) by forming time slices in the context of perception processes. These memory contents have different abstract structures, are networked with each other in different ways, and are assessed in different ways. In addition, the brain automatically compares current perceptions with such stored contents and immediately reports when a current perception differs from the last perceived contents. In this way, the phenomenon of change is a fundamental cognitive achievement of the brain, which thereby makes available, in the form of a fundamental process structure, something like a feeling of time. The weight of this property in the context of evolution can hardly be overestimated, as time as such is in no way perceptible.[12]

[12] The modern invention of machines that can generate periodic signals (oscillators, clocks) has been successfully integrated into people's everyday lives. However, this artificially (technically) producible time has nothing to do with the fundamental change found in reality. Technical time is a tool that we humans have invented to somehow structure the otherwise amorphous mass of the stream of phenomena. Since the amorphous mass itself exhibits structures that are obvious to everyone, namely repeating cycles of change (e.g., sunrise and sunset, moonrise and moonset, the seasons, …), a correlation between technical time models and natural time phenomena suggested itself. From the correlations found here, however, one should not conclude that the amorphous mass of the world's stream of phenomena actually behaves according to our technical time model. Einstein's theory of relativity at least makes us aware that there can be various (or only one?) asymmetries between technical time and the world's stream of phenomena.
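The brain's comparison of current perceptions with stored contents, as described above, can be illustrated with a minimal sketch (a purely hypothetical model, treating perception as discrete time slices; the perception labels are invented):

```python
# Purely hypothetical model: perception arrives as discrete time slices,
# and "change" is whatever differs between one slice and the next.

def detect_changes(slices: list) -> list:
    """For each pair of successive slices, report what appeared
    or disappeared (symmetric difference of the two sets)."""
    return [prev ^ curr for prev, curr in zip(slices, slices[1:])]

perceptions = [
    {"sun is low", "birds are quiet"},
    {"sun is low", "birds are singing"},  # the birds changed
    {"sun is up", "birds are singing"},   # the sun changed
]
for change in detect_changes(perceptions):
    print(change)
```

The point of the sketch is only structural: change is not read off an object, it emerges from comparing successive states.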


Assuming this fundamental sense of time in humans, one can in principle recognize whether a current phenomenon, compared to all preceding phenomena, is somehow similar or markedly different, and in this sense indicates something qualitatively new.[13]

[13] Ultimately, an individual human only has its individual memory contents available for comparison, while a collective of people can in principle consult the set of all records. However, as is known, only a minimal fraction of the experiential reality is symbolically transformed.

If one presupposes the concept of directed time, a qualitatively new event can be assigned an information value in Shannon's sense; the phenomenon itself can also acquire a value in terms of linguistic meaning, and possibly in the cognitive area: relative to a spanned knowledge space, the occurrence of a qualitatively new event can significantly strengthen a theoretical assumption. In the latter case, the cognitive relevance may mutate into a sustainable relevance if the assumption marks a real action option that could be important for further progress. This would provoke the necessity of a decision: should we adopt this action option or not? Humans can accomplish the finding of qualitatively new things; they are designed for it by evolution. But what about text generators?
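The information value in Shannon's sense mentioned above can be made concrete: the rarer an event, the higher its information content (surprisal), I(x) = -log2 p(x). A minimal illustration:

```python
import math

# The information value of an event in Shannon's sense: the rarer (more
# "qualitatively new") an event, the higher its surprisal in bits.

def surprisal_bits(probability: float) -> float:
    """I(x) = -log2 p(x): information content of an event with probability p."""
    return -math.log2(probability)

print(surprisal_bits(0.5))    # 1.0 bit: a common, expected event
print(surprisal_bits(0.001))  # about 9.97 bits: a rare, novel event
```

In this sense, a genuinely new event carries maximal information precisely because no existing expectation assigned it a high probability.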

Text generators so far do not have a sense of time comparable to that of humans. Their starting point would be texts that are different, in such a way that there is at least one text that is the most recent on the timeline and describes real events in the real world of phenomena. Since a text generator (as of September 2023) does not yet have the ability to classify texts regarding their applicability/non-applicability in the real world, its use would normally end here. Assuming that there are people who manually perform this classification for a text generator [14] (which would greatly limit the number of possible texts), then a text generator could search the surface of these texts for similar patterns and, relative to them, for those that cannot be compared. Assuming that the text generator would find a set of non-comparable patterns in acceptable time despite a massive combinatorial explosion, the problem of semantic qualification would arise again: which of these patterns can be classified as an indication of something qualitatively new? Again, humans would have to become active.

[14] Such support of machines by humans in the field of so-called intelligent algorithms has often been applied (and is still being applied today, see: [https://www.mturk.com/] (Accessed: September 27, 2023)), and is known to be very prone to errors.
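As a purely hypothetical illustration (not a description of how actual text generators work), the surface-pattern comparison described above could be reduced to a simple n-gram overlap check; the example sentences are invented:

```python
# Toy stand-in for a "surface pattern" comparison: a candidate text counts
# as potentially novel to the degree that its word bigrams do not occur
# anywhere in a reference corpus.

def bigrams(text: str) -> set:
    """All adjacent word pairs of a text, lowercased."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def novelty(candidate: str, corpus: list) -> float:
    """Share of the candidate's bigrams not seen anywhere in the corpus."""
    seen = set()
    for doc in corpus:
        seen |= bigrams(doc)
    cand = bigrams(candidate)
    if not cand:
        return 0.0
    return len(cand - seen) / len(cand)

corpus = ["the rabbit is in the garden", "the garden is green"]
print(novelty("the rabbit is in the garden", corpus))    # 0.0: nothing new
print(novelty("a rabbit builds small rockets", corpus))  # 1.0: every bigram is new
```

Note that such a check only flags unfamiliar surface patterns; the semantic qualification of a pattern as something qualitatively new would, as the text argues, still fall to humans.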

As before, the verdict is mixed: left to itself, a text generator will not be able to solve this task, but in cooperation with humans, it may possibly provide important auxiliary services, which could ultimately be of existential importance to humans in search of something qualitatively new despite all limitations.

THE IMMANENT PREJUDICE OF THE SCIENCES

A prejudice is, as is well known, the assessment of a situation as an instance of a certain pattern which the person judging assumes to apply, even though there are numerous indications that the assumed applicability is empirically false. Due to the permanent requirement of everyday life that we have to make decisions, humans have, through their evolutionary development, acquired the fundamental ability to make many of their everyday decisions largely automatically. This offers many advantages, but it can also lead to conflicts.

Daniel Kahneman introduced in this context, in his book [15], the two terms System 1 and System 2 for a human actor. These terms describe, in his concept of a human actor, two behavioral complexes that can be distinguished by certain properties.[16] System 1 is given with the overall system of the human actor and is characterized by the fact that the actor can respond largely automatically to the demands of everyday life. The human actor has automatic answers to certain stimuli from his environment, without having to think much about them. In case of conflicts within System 1, System 2 becomes active; in conscious mode, System 2 exercises some control over the appropriateness of System 1 reactions in a given situation. System 2 does not have automatic answers ready, but has to work out an answer to a given situation laboriously, step by step. However, there is also the phenomenon that complex processes which must be carried out frequently can be automated to a certain extent (cycling, swimming, playing a musical instrument, language, mental arithmetic, …). All these processes rest on preceding decisions that encompass different forms of preferences. As long as these automated processes are appropriate in the light of a certain rational model, everything seems fine. But if the corresponding model is distorted in any sense, then these models would be said to carry a prejudice.

[15] Daniel Kahneman, Thinking, Fast and Slow, Penguin Books Random House, UK, 2012 (first published 2011)

[16] See Chapter 1 in Part 1 of (Kahneman, 2012, pages 19-30).

In addition to the countless examples that Kahneman himself cites in his book to show the susceptibility of System 1 to such prejudices, it should be pointed out here that the model of Kahneman himself (and many similar models) can carry a prejudice that is of a considerably more fundamental nature. The division of the behavioral space of a human actor into a System 1 and 2, as Kahneman does, obviously has great potential to classify many everyday events. But what about all the everyday phenomena that fit neither the scheme of System 1 nor the scheme of System 2?

In the case of making a decision, the System 1 account says that people, if an answer is available, automatically call it up and execute it. Only in the case of conflict, under the control of System 2, can there be lengthy operations that lead to other, new answers.

In the case of decisions, however, it is not just about reacting at all, but there is also the problem of choosing between known possibilities or even finding something new because the known old is unsatisfactory.

Established scientific disciplines have their specific perspectives and methods that define certain areas of everyday life as their subject area. Phenomena that do not fit into this predefined space simply do not occur for the relevant discipline, for methodological reasons. In the area of decision-making, and thus of typically human structures, there are quite a few areas that have so far not found official entry into any scientific discipline. At any given point in time, there are ultimately many large areas of phenomena that really exist but are, for methodological reasons, absent from the view of the individual sciences. For a scientific investigation of the real world, this means that the sciences, due to their immanent exclusions, are burdened with a massive reservation against the empirical world. For the task of selecting suitable sustainable goals within the framework of a sustainable science, this structurally conditioned fact can be fatal. Loosely formulated: under the banner of actual science, a central principle of science, the openness to all phenomena, is simply excluded, so that the existing structure does not have to be changed.

For this question of a meta-reflection on science itself, text generators are again reduced to the role of possible abstract text-delivery services under the direction of humans.

SUPERVISION BY PHILOSOPHY

The just-described fatal dilemma of all modern sciences must be taken seriously, since without an efficient science, a sustainable reflection on the present and the future cannot be realized in the long term. If one agrees that the fatal bias of the sciences stems from the fact that each discipline works intensively within its disciplinary boundaries but does not systematically organize communication and reflection beyond those boundaries, with a view to other disciplines, as meta-reflection, then the question must be answered whether and how this deficit can be overcome.

There is only one known answer to this question: starting from those guiding concepts that are constitutive for the individual disciplines, one must search for a conceptual framework within which these guiding concepts can meaningfully interact, both in their own right and in interaction with other guiding concepts.

This is genuinely the task of philosophy, concretized by the example of the philosophy of science. However, this would mean that each individual science would have to use a sufficiently large part of its capacities to make the idea of the one science in maximum diversity available in a real process.

For the hard conceptual work hinted at here, text generators will hardly be able to play a central role.

COLLECTIVE INTELLIGENCE

Since so far no individual science has a concept of intelligence that goes beyond a single discipline, it makes little sense at first glance to apply the term intelligence to collectives. However, looking at the cultural achievements of humanity as a whole, not least with a view to the language used, it is undeniable that a description of the performance of an individual person is incomplete without reference to the whole.

So, if one tries to assign an overarching meaning to the letter combination intelligence, one will not be able to avoid deciphering this phenomenon of the human collective, in the form of complex everyday processes in a no less complex dynamic world, at least to the extent that one can identify some corresponding empirical something for the letter combination intelligence with which a comprehensible meaning could be constituted.

Of course, this term should be scalable for all biological systems, and one would need a comprehensible procedure that allows the various technical systems to be related to this concept of collective intelligence in such a way that direct performance comparisons between biological and technical systems become possible.[17]

[17] The often quoted and popular Turing Test (see: Alan M. Turing: Computing Machinery and Intelligence. In: Mind. Volume LIX, No. 236, 1950, 433–460, [doi:10.1093/mind/LIX.236.433], accessed: Sept 29, 2023) in no way meets the methodological requirements that one would have to adhere to if one actually wanted to arrive at a qualified performance comparison between humans and machines. Nevertheless, the basic idea of Turing’s meta-logical text from 1936, published in 1937 (see: A. M. Turing: On Computable Numbers, with an Application to the Entscheidungsproblem. In: Proceedings of the London Mathematical Society. Volume s2-42, No. 1, 1937, 230–265, [doi:10.1112/plms/s2-42.1.230], accessed: Sept 29, 2023), seems to be a promising starting point: in trying to present an alternative formulation of Kurt Gödel’s (1931) proof on the undecidability of arithmetic, Turing conducts a meta-logical proof, and in this context he introduces the concept of a machine that was later called the Universal Turing Machine.

Already in this proof approach one can see how Turing transforms the phenomenon of a human computer (a clerk performing calculations) at a meta-level into a theoretical concept, by means of which he can then meta-logically examine the behavior of this human computer in a specific behavioral space. His meta-logical proof not only confirmed Gödel’s meta-logical proof, but also indirectly indicates how ultimately any complex of phenomena can be formalized on a meta-level in such a way that one can then make formally demanding arguments with it.

CONCLUSION STRUCTURALLY

The idea of philosophical supervision of the individual sciences with the goal of a concrete integration of all disciplines into an overall conceptual structure seems to be fundamentally possible from a philosophy of science perspective based on the previous considerations. From today’s point of view, specific phenomena claimed by individual disciplines should no longer be a fundamental obstacle for a modern theory concept. This would clarify the basics of the concept of Collective Intelligence and it would surely be possible to more clearly identify interactions between human collective intelligence and interactive machines. Subsequently, the probability would increase that the supporting machines could be further optimized, so that they could also help in more demanding tasks.

CONCLUSION SUBJECTIVELY

Attempting to characterize the interactive role of text generators in a human-driven scientific discourse, assuming a certain scientific model, appears to be fairly clear from a transdisciplinary (and thus structural) perspective. However, such scientific discourse represents only a sub-space of the general human discourse space. In the latter, the reception of texts from the perspective of humans inevitably also has a subjective side [18]: people are used to suspecting a human author behind a text. With the appearance of technical aids, texts have increasingly become products which contain more and more formulations that are no longer written down by a human author alone, but are produced by the technical aids themselves, mediated by a human author. With the appearance of text generators, the proportion of technically generated formulations increases dramatically, up to the case in which the entire text is the direct output of a technical aid. It then becomes difficult or impossible to recognize to what extent one can still speak of a controlling human share. The human author thus disappears behind the text: a sign reality remains which does not prevent the human reader from existentially projecting an inner world onto a potential human author, but which threatens to lose itself, or actually loses itself, in the real absence of a human author facing a chimeric human counterpart. What happens in a world where people no longer have human counterparts?

[18] There is an excellent analysis on this topic by Hannes Bajohr titled “Artifizielle und postartifizielle Texte. Über Literatur und Künstliche Intelligenz” (Artificial and Post-Artificial Texts: On Literature and Artificial Intelligence). It was the Walter-Höllerer-Lecture 2022, delivered on December 8, 2022, at the Technical University of Berlin. The lecture can be accessed here [ https://hannesbajohr.de/wp-content/uploads/2022/12/Hoellerer-Vorlesung-2022.pdf ] (Accessed: September 29, 2023). The reference to this lecture was provided to me by Jennifer Becker.

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for nearly 97 – 99% of the text! The diagrams of the German version have been left out.

CONTEXT

This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of the text itself and its validity.

In this conference many different views are presented that are possible on this topic.

TRANSDISCIPLINARY

My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. From this, better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses can then be derived.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.

HUMAN TEXT GENERATION

The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.

TEXT CAPABLE MACHINES

With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators are again ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.

BIOLOGICAL — NON-BIOLOGICAL

On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.

BLANK INTELLIGENCE TERM

The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, there exists today no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.

PREREQUISITES FOR TEXT GENERATION

If the homo-sapiens population is identified as the original actor of ‘text generation’ and ‘text comprehension’, we shall now first examine which ‘special characteristics’ enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.

VALIDITY

A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘If’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.

ASYMMETRY: APPLICABLE – NOT APPLICABLE

One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.

MEANING

This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
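As a purely illustrative aid, the interaction of these three components can be given a toy sketch. The following code (all class names, words, and knowledge elements are hypothetical inventions for illustration, not part of the original text) models the ‘meaning relation’ as a dynamic many-to-many mapping that can link any knowledge element to any means of expression at any time:

```python
# Toy sketch of the 'meaning relation' between 'language' and 'knowledge'.
# All concrete names and entries are illustrative assumptions.
from collections import defaultdict

class MeaningRelation:
    """A dynamic many-to-many mapping between expressions and knowledge elements."""
    def __init__(self):
        self.expr_to_knowledge = defaultdict(set)
        self.knowledge_to_expr = defaultdict(set)

    def link(self, expression: str, knowledge_element: str) -> None:
        # Any expression can be linked to any knowledge element at any time.
        self.expr_to_knowledge[expression].add(knowledge_element)
        self.knowledge_to_expr[knowledge_element].add(expression)

    def meanings_of(self, expression: str) -> set:
        return self.expr_to_knowledge[expression]

rel = MeaningRelation()
rel.link("bank", "edge of a river")
rel.link("bank", "financial institution")
print(rel.meanings_of("bank"))  # one expression linked to two knowledge elements
```

The point of the sketch is only structural: the relation is not fixed but grows over time, and neither side determines the other uniquely, which is one source of the complexity mentioned above.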

FUTURE AND EMOTIONS

In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]

SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS

In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all.

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on the next higher meta-level relative to the two texts A and V, and its application can change the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
  14. Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, after all, with his corporeality completely part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally in the next moment do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
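The steps above can be condensed into a small computational sketch. Here the source text A is modelled as a set of true statements, each change rule in V as a triple (condition, statements to add, statements to remove), and the application text F as a loop that repeatedly applies the first applicable rule; all concrete facts and rules are invented for illustration and are not from the source:

```python
# Minimal sketch of the 'empirical theory' machinery: source text A,
# change rules V, application procedure F producing a simulation <A*1, ..., A*n>.
# The concrete statements and rules are hypothetical illustrations.

def apply_rules(A, V, max_steps=10):
    """F: repeatedly apply the first applicable rule of V to the current state."""
    trace = []
    state = frozenset(A)
    for _ in range(max_steps):
        for condition, to_add, to_remove in V:
            if condition <= state:               # rule is applicable in this state
                state = (state - to_remove) | to_add
                trace.append(state)              # a new subsequent text A*
                break
        else:
            break                                # no rule applicable: simulation ends
    return trace                                 # the simulation <A*1, ..., A*n>

A = {"it is raining", "the street is dry"}
V = [({"it is raining", "the street is dry"},   # condition
      frozenset({"the street is wet"}),          # statements to add
      frozenset({"the street is dry"}))]         # statements to remove
simulation = apply_rules(A, V)
print(simulation[-1])  # final state contains 'the street is wet'
```

With several competing rules whose conditions overlap, different application orders would yield different traces, which is exactly the ‘arbitrarily large set of possible courses’ described in the list.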

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ lies ahead of the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.

SUSTAINABLE EMPIRICAL THEORY

With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what they want to have as ‘their future’ out of all this. In the present we experience the situation that mankind gives the impression that it agrees to destroy life beyond the human population ever more ‘sustainably’, with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally lightened variant of possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, what percentage (%) of the properties of the target state Z has been achieved so far, provided that the favored target state has been transformed into a suitable text Z.
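The evaluation procedure EVAL mentioned here can be sketched as a simple coverage measure: a current state A and a target text Z are both modelled as sets of properties, and the degree of fulfilment is the percentage of target properties already present in the current state. The concrete property strings below are hypothetical illustrations, not from the source:

```python
# Sketch of EVAL: percent of the target state's properties achieved so far.
# The property names are invented for illustration.

def eval_progress(current_state: set, target_Z: set) -> float:
    """Return the percentage of properties of target text Z present in the state."""
    if not target_Z:
        return 100.0                      # an empty target is trivially fulfilled
    achieved = current_state & target_Z   # target properties already realized
    return 100.0 * len(achieved) / len(target_Z)

A = {"renewable share 50%", "public transport expanded"}
Z = {"renewable share 50%", "public transport expanded",
     "emissions halved", "biodiversity stabilized"}
print(eval_progress(A, Z))  # 50.0
```

Run repeatedly over time, such a measure would make the progress toward the favored future textually and numerically trackable, which is the ‘certain clarity’ the next paragraph speaks of.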

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]

MAN-MACHINE

After these preliminary considerations, the final question is whether and how the main question of this conference – “How do AI text generators change scientific discourse?” – can be answered at all.

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria for scientific discourse that also meets the requirements for empirical or even sustained empirical theories.

In doing so, it becomes apparent that, both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with both the shared experiential world and the dynamic knowledge and meaning components in each actor plays a role.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of interpreting and acting on the future has accompanied the Homo Sapiens population from the very beginning. The not yet understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.

But what about the (i) elaboration of a theory or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory ?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]

COMMENTS

[1] https://zevedi.de/en/topics/ki-text-2/

[2] Talking about ’emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’, but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individual subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that a text generated by chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses an authentic text as a template that it can modify. But then this is a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.