All posts by Gerd Doeben-Henisch

ChatGPT4: How do you define the concept ‘empirical theory’?

Last Change: July 16, 2024

Inquirer: Gerd Doeben-Henisch

Email: info@uffmm.org

CONTEXT

This post is part of the Dialogues with chatGPT4.

How do you define the concept ‘empirical theory’?

SUMMARY

At the end of the dialogue the inquirer stated: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

chatGPT4: Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

Protocol of the Dialogue

QUESTION: How do you define the concept ‘empirical theory’?

The concept of an “empirical theory” generally refers to a theory that is grounded in and supported by observations and data derived from real-world experiences, experiments, or measurements, rather than purely theoretical or abstract reasoning. Here are some key aspects that define an empirical theory:

1. **Data-Driven**: An empirical theory is based on data collected through observations, experiments, or other forms of empirical research. The theory must be testable and measurable against the natural world.

2. **Predictive Power**: It typically provides a framework that not only explains current observations but also predicts future occurrences or behaviors. This predictive aspect is crucial for the validation and utility of the theory.

3. **Falsifiability**: A core principle in empirical theories is that they must be falsifiable. This means there must be some conceivable observation or experiment that could refute the theory if it were not true, making it subject to potential disproof.

4. **Replicability**: The results that support an empirical theory should be replicable by independent studies. This is important to ensure that the theory is robust and not the result of experimental errors or biases.

5. **Revision and Refinement**: Empirical theories are not static; they evolve over time as new data become available and as old assumptions are tested. Theories may be refined or discarded depending on how well they continue to match observations.

Empirical theories are fundamental in sciences such as physics, chemistry, biology, and social sciences, where theories must be grounded in observable phenomena and subject to experimental validation.

QUESTION: You mention as one key property of an empirical theory ‘Predictive Power’. What are the Formalisms within a theory to enable ‘predictive power’?

The concept of “predictive power” in an empirical theory is central to its utility and credibility. Predictive power allows a theory to forecast future events or phenomena based on existing data and principles. Several formalisms within a theory enable this predictive capability:

1. **Mathematical Models**: Many scientific theories rely on mathematical models that describe relationships between variables quantitatively. These models often take the form of equations that can be used to make predictions. For example, Newton’s laws of motion use differential equations to predict the future position and velocity of moving objects.

2. **Statistical Models**: These models use statistical methods to infer and predict outcomes based on data. Regression analysis, for example, can predict a dependent variable based on one or more independent variables. Machine learning algorithms also fall into this category, where models are trained on historical data to predict outcomes from new data.

3. **Hypotheses and Propositions**: Theories often include specific hypotheses that are testable statements derived from general principles of the theory. These hypotheses can be used to predict outcomes in specific situations, which can then be tested through experiments or observations.

4. **Simulations**: In some fields, particularly in complex systems like climate science or economics, simulations play a crucial role in prediction. These simulations are typically run on computational models that incorporate many variables and their interactions to forecast future states.

5. **Algorithmic Rules**: Especially in theories related to computer science or systems theory, algorithmic rules that specify sequences of actions or decisions based on input conditions can predict outcomes of processes or the behavior of systems.

6. **Causal Inference**: Identifying causal relationships within data allows theories to predict the effects of changing one variable on another. This is fundamental in fields like epidemiology and economics, where understanding cause and effect is crucial for making predictions.

These formalisms are not only instrumental in predicting future events but also in confirming the validity of the theories themselves. By successfully predicting outcomes, theories validate their underlying assumptions and enhance their credibility and acceptance in the scientific community.
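To make the first of these formalisms concrete, here is a minimal editorial sketch (not part of the original dialogue): Newton's second law, integrated numerically with explicit Euler steps, forecasts the height of an object in free fall, and the closed-form solution lets us check the prediction. All numbers are illustrative.

```python
# Minimal sketch: prediction with a mathematical model (Newton's second law).
# We integrate dx/dt = v, dv/dt = -g with explicit Euler steps to forecast
# the height of an object dropped from 100 m. All values are illustrative.

G = 9.81  # gravitational acceleration in m/s^2

def predict_height(x0: float, v0: float, t_end: float, dt: float = 0.001) -> float:
    """Forecast the height at time t_end for an object in free fall."""
    x, v, t = x0, v0, 0.0
    while t < t_end:
        x += v * dt      # advance position using the current velocity
        v -= G * dt      # advance velocity using the constant acceleration
        t += dt
    return x

# The analytic solution x(t) = x0 - 0.5*g*t^2 lets us validate the forecast,
# which is exactly the "testable against observation" property named above.
predicted = predict_height(100.0, 0.0, 2.0)
analytic = 100.0 - 0.5 * G * 2.0**2
assert abs(predicted - analytic) < 0.1
```

The same structure, a state, a law of change, and a step rule, underlies far more elaborate models; only the equations and the integrator change.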

QUESTION: You mention six different formalisms that enable predictions within an empirical theory. But these six formalisms are as such formally not compatible. Does this mean that we are practicing six different kinds of theories which are ‘incompatible’?

The presence of various formalisms within empirical theories, each suitable for different types of data, phenomena, or disciplines, does not necessarily imply that these theories are “incompatible” in the broader sense of scientific inquiry. Instead, it reflects the diverse nature of the world and the complexity of phenomena that science seeks to understand and predict. Here’s how these differences can be viewed:

1. **Complementarity**: Different formalisms often address different aspects of phenomena or work best under different conditions. For instance, statistical models may be best for systems where direct causal relationships are difficult to measure due to noise or complexity, while causal inference is preferred in situations where the relationship between variables is clear and measurable. These approaches are complementary rather than incompatible.

2. **Interdisciplinary Integration**: In many fields, insights from multiple formalisms are integrated to form a more comprehensive understanding. For example, in climate science, mathematical models of physical processes are combined with statistical models derived from historical data to make more accurate predictions about future climate conditions.

3. **Scalability and Applicability**: Some formalisms are more applicable at certain scales or in certain contexts. For instance, algorithmic rules might be most applicable in computer science and certain areas of engineering, while mathematical models are often used in physics and chemistry across various scales.

4. **Evolution of Theories**: Theories can evolve over time as new tools and formalisms are developed. This evolution can lead to the merging of previously distinct approaches or the refinement of existing theories to incorporate new findings.

5. **Methodological Pluralism**: The diversity of formalisms is a strength that allows researchers to tackle complex questions from multiple angles. Methodological pluralism acknowledges that no single approach can capture all aspects of complex realities, and different methods can provide different insights that are valid within their own scopes.

In conclusion, rather than viewing different formalisms as leading to incompatible theories, it’s more productive to see them as different tools in a toolkit, each useful for certain types of problems. The integration of these tools across disciplines and their ongoing development is part of what drives scientific progress.

QUESTION: That ‘different formalisms’ can be seen as ‘each useful for certain types of problems’ is OK. But this does not answer the basic question of how to ‘integrate different formalisms’. Do you know any kind of strategy that solves the problem on a general formal level?

Integrating different formalisms from diverse scientific disciplines is a complex challenge that often requires a multi-faceted approach. Here are some strategies that can help address this issue on a more formal level:

### 1. **Unified Modeling Frameworks**

One approach to integrating different formalisms is to use unified modeling frameworks that can accommodate various types of models. For example, systems theory and network theory provide frameworks that can model physical, biological, social, and information systems. These frameworks allow different types of data and relationships to be incorporated into a cohesive model. Examples include agent-based modeling and system dynamics, which can integrate both qualitative and quantitative data.
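As an editorial illustration of the agent-based idea (a toy model, not drawn from any specific framework named above): heterogeneous agents follow simple local interaction rules, and qualitative rules (who interacts with whom) and quantitative state (an opinion value) coexist in one model.

```python
import random

# Toy agent-based model: opinion averaging in a small population.
# Each agent holds a numeric opinion; at every step a random pair
# moves partway toward each other's view. Qualitative rules (who
# interacts) and quantitative dynamics (the opinion values) are
# accommodated within a single modeling framework.

def step(opinions: list, rate: float = 0.5) -> None:
    """One interaction: a random pair moves toward their midpoint."""
    i, j = random.sample(range(len(opinions)), 2)
    mid = (opinions[i] + opinions[j]) / 2
    opinions[i] += rate * (mid - opinions[i])
    opinions[j] += rate * (mid - opinions[j])

def simulate(n_agents: int = 20, n_steps: int = 2000, seed: int = 0) -> list:
    random.seed(seed)  # fixed seed makes the run reproducible
    opinions = [random.uniform(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps):
        step(opinions)
    return opinions

final = simulate()
# Repeated local interactions drive the population toward consensus.
assert max(final) - min(final) < 0.01
```

The prediction here is qualitative (convergence) rather than a single number, which is typical of what such frameworks deliver.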

### 2. **Meta-Analysis and Systematic Reviews**

In areas like medicine and social sciences, meta-analysis and systematic reviews are used to synthesize results from multiple studies that may use different methodologies and formalisms. This approach provides a way to quantitatively combine data from different sources to arrive at more general conclusions.

### 3. **Interdisciplinary Collaboration**

Active collaboration across disciplines can facilitate the integration of different formalisms by leveraging the expertise of specialists in each area. This might involve joint research projects, interdisciplinary teams, and cross-disciplinary training programs that encourage a deeper understanding and adaptation of diverse methodologies.

### 4. **Mathematical and Statistical Bridging Methods**

Techniques such as Bayesian statistics, Monte Carlo methods, and other computational approaches can serve as bridges between different formalisms. These methods allow for the combination of data and predictions from different types of models, even when these models are based on different foundational assumptions.
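As an editorial sketch of such a bridge (the models and data below are invented for illustration): Bayesian model averaging weights the forecasts of two structurally different models by how well each explains the observed data, so that models built on different assumptions can still be combined into one prediction.

```python
import math

# Sketch of Bayesian model averaging: two structurally different
# formalisms (a linear trend model and a constant model) predict the
# same quantity; their forecasts are combined by posterior weight.

observations = [1.1, 1.9, 3.2, 3.9, 5.1]  # illustrative measurements at t=1..5

def gaussian_loglik(residuals, sigma=0.5):
    """Log-likelihood of residuals under independent Gaussian noise."""
    return sum(-0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
               for r in residuals)

# Model A: y = t (linear trend).  Model B: y = mean of the data.
pred_a = [float(t) for t in range(1, 6)]
mean_b = sum(observations) / len(observations)
pred_b = [mean_b] * len(observations)

loglik_a = gaussian_loglik([y - p for y, p in zip(observations, pred_a)])
loglik_b = gaussian_loglik([y - p for y, p in zip(observations, pred_b)])

# Posterior model weights under equal priors (log-sum-exp for stability).
m = max(loglik_a, loglik_b)
w_a = math.exp(loglik_a - m) / (math.exp(loglik_a - m) + math.exp(loglik_b - m))

# The trend model explains the data far better, so it dominates the
# combined forecast for the next time point t=6.
forecast = w_a * 6.0 + (1 - w_a) * mean_b
assert w_a > 0.99
```

The key point is that the combination happens at the level of probabilistic weights, not at the level of the models' internal formalisms, which may remain entirely different.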

### 5. **Semantic Integration**

This involves developing a common vocabulary and set of concepts that can be shared across disciplines. Ontologies and semantic web technologies are useful in this context, as they help define relationships between different concepts and facilitate data integration across diverse datasets.

### 6. **Hybrid Models**

Creating hybrid models that incorporate elements from different formalisms can be effective, especially in engineering and environmental science. For example, combining physical laws with empirical data within a computational model can yield predictions that are more accurate and robust than those obtained from a single formalism.
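A minimal editorial sketch of such a hybrid (invented numbers, not from the dialogue): a physical law supplies the backbone prediction, and a correction term fitted to empirical measurements absorbs what the law misses, here a drag-like deviation from ideal free fall.

```python
# Hybrid model sketch: physics core + empirical correction.
# The physical law predicts free-fall distance d = 0.5*g*t^2; the
# (invented) measurements fall short of it, e.g. through drag, so we
# fit a linear correction to the residuals and add it to the physics.

G = 9.81
times = [0.5, 1.0, 1.5, 2.0, 2.5]
measured = [1.20, 4.80, 10.70, 18.90, 29.20]  # illustrative data

def physics(t: float) -> float:
    """Idealized free-fall distance after time t."""
    return 0.5 * G * t * t

# Ordinary least-squares fit of residual = a*t + b (closed form).
residuals = [m - physics(t) for t, m in zip(times, measured)]
n = len(times)
mean_t = sum(times) / n
mean_r = sum(residuals) / n
den = sum((t - mean_t) ** 2 for t in times)
a = sum((t - mean_t) * (r - mean_r) for t, r in zip(times, residuals)) / den
b = mean_r - a * mean_t

def hybrid(t: float) -> float:
    """Physical prediction plus the empirically fitted correction."""
    return physics(t) + a * t + b

# The hybrid prediction tracks the data more closely than physics alone.
err_physics = sum(abs(m - physics(t)) for t, m in zip(times, measured))
err_hybrid = sum(abs(m - hybrid(t)) for t, m in zip(times, measured))
assert err_hybrid < err_physics
```

The design choice is deliberate: the physical core keeps the model interpretable and extrapolable, while the fitted term captures only the discrepancy, exactly the division of labor the hybrid strategy aims at.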

### 7. **Philosophical and Methodological Reconciliation**

Addressing underlying philosophical and methodological differences is crucial for integrating formalisms. This might involve developing new methodologies that acknowledge and bridge epistemological differences, such as combining qualitative and quantitative research methods.

### Conclusion

While integrating different formalisms poses significant challenges, these strategies offer pathways to build more comprehensive and universally applicable theories. The process is inherently iterative and requires ongoing dialogue between different scientific domains. This integration is essential for tackling complex, multifaceted problems that no single discipline can solve alone.

QUESTION: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

The complexity of integrating formalisms arises from several factors:

1. **Epistemological Differences**: Different disciplines often operate under fundamentally different assumptions and theories about the nature of knowledge and reality. These differences can make it challenging to create a common framework that is accepted and useful across disciplines.

2. **Methodological Diversity**: Each field has developed its own methodologies and tools, optimized for its specific types of data and problems. Bridging these methodological gaps requires not just technical solutions but often a rethinking of foundational principles.

3. **Data Incompatibility**: Diverse types of data, from quantitative measurements to qualitative observations, can be difficult to integrate due to differences in scale, units, and interpretations.

4. **Communication Barriers**: Effective interdisciplinary collaboration, which is crucial for integrating formalisms, is often hindered by differences in terminology, priorities, and research cultures.

The strategies I mentioned (unified modeling frameworks, interdisciplinary collaboration, hybrid models, etc.) represent efforts to address these challenges, but they are more about mitigating the difficulties rather than completely solving them. Progress is being made, especially with the advent of advanced computational techniques and a growing emphasis on interdisciplinary research, but fully integrating diverse scientific formalisms into a coherent, universally applicable framework remains an aspirational goal rather than a current reality.

Thus, while significant strides can be and have been made in specific contexts or projects, the broader challenge persists and is a dynamic field of academic inquiry and practical application.

ANSWER of INQUIRER: Thanks for this dialogue 🙂

You’re welcome! I’m glad we could discuss this topic together. If you have any more questions or need further clarification on anything else, feel free to ask. Happy thinking! 😊

Self-destruction is an option

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 14, 2024, Last change: July 14, 2024)

Starting Point

We are still here on this planet. However, the fact that our ‘human system’ largely programs itself without our ‘voluntary participation’ can become our downfall within the framework of a democracy and in the age of the internet; not necessarily, but highly likely…

Self-programming

Unlike today’s machines, all living beings, including humans, are designed to ‘self-program’: whatever we perceive of ourselves, others, and the environment is automatically carried into our interior, where it is largely automatically structured, arranged, evaluated, and much more. No one can resist it. One can only steer this ‘self-programming’ by shaping one’s environment in such a way that certain perceptions become less likely and others more likely. Educational processes presuppose this capacity for self-programming, and they also decide what should happen during education.

Dictatorship of the ‘Is There’

What has arrived within us is there for the time being. It forms our primary reality. When we want to act, we first rely on what is there. What is there is somehow ‘true’ for us, shaping our further perception and understanding. Something ‘around us’ that is ‘different’ is indeed ‘different’ and ‘does not fit’ with our inner truth.

Which points of comparison?

If the majority of what is ‘inside us’, what we initially assume to be ‘true’, were ‘false’, ‘inappropriate’, ‘inaccurate’, etc., in the real world outside, we would have little chance of recognizing our own ‘untruth’ as long as most people around us share the same ‘untruth’.[1] Recognizing ‘untruth’ presupposes that one somehow has ‘examples of truth’ suitable for being ‘compared’ with the prevailing untruth. However, the presence of examples of truth does not guarantee the recognition of untruth; it only increases the likelihood that it might happen.[2]

[1] Throughout the known history of humanity, we can observe how certain ‘untruths’ were able to dominate entire peoples, not only in autocratic systems.

[2] Systems with state-promoted untruth can be identified, among other things, by the suppression of diversity, examples of truth, and the allowance of only certain forms of opinions.

Modern Untruths

Unlike autocratic systems, democratic systems officially have ‘freedom of speech’, which allows for great diversity. In democratic systems of the Democracy 1.0 format, it is assumed that this guaranteed freedom is not abused.[1]

With the advent of modern media, especially media in conjunction with the internet, it has become possible to make money on a large scale with the distribution of media. It is tempting to spread media over the internet in whatever way earns the maximum amount of money. A popular method is ‘advertising’: the longer and more often a user stays in front of content, the more advertising revenue flows. The temptation is great enough to offer users only what arouses their ‘automatic interest’. That ‘automatic interest’ is a very strong motive, and that it correlates specifically with content that does not require much thought, is confirmed daily. It is now known, and increasingly documented, that large parts of a population can be ‘specially programmed’ in this way.

In the ‘struggle of power-driven systems’, this possibility of externally programming people via the internet is massively exploited in so-called ‘hybrid warfare’. While autocratic systems are ‘massively closed’, modern democracies in the 1.0 format are almost an El Dorado for the application of hybrid warfare methods. Like the money-driven media industry, hybrid warfare also uses ‘light content’, mixing fragments of the ‘true’ with fragments of the ‘false’, particularly those that easily excite, and in a short time the ‘flock of believers’ in these messages grows.[2] The ‘congregation of these propaganda believers’ usually cannot be influenced by ‘arguments’. These convictions are programmed in such a way that all sources which could represent critical alternatives are ‘outlawed’ from the start.[3]

And unfortunately it is true that democracies in the 1.0 format have so far appeared ‘weak’ and ‘helpless’ against this kind of use of freedom, although the recognition is slowly growing that there is such a thing as abuse through the ‘false programming of people’.

[1] Preventing systematic abuse of freedom in Democracies 1.0 is difficult to impossible without changing freedom of speech itself.

[2] The broad coverage of this propaganda can be easily recognized when one talks to people in different places in Germany (and abroad!) who do not know each other, but who tell more or less the same stories in a tone of conviction during the course of the conversation. Many (most of them?) even have higher education. This raises the question of how little an academic education apparently promotes a ‘critical spirit’.

[3] A popular term is ‘lying press’. Anything that could become ‘dangerous’ is ‘a lie’, although those who talk about lying press do not seriously engage with this press at all.

What is the probability of survival of truth?

Anyone who has ever delved deeply into the question of ‘truth’ in their life, and who knows that it takes various research, investigations, considerations, and even experiments to ‘look behind the apparent phenomena’, along with much communication with other people, sometimes in other languages, knows that truth is not automatic; truth does not just happen; truth cannot be obtained for ‘free’. The use of truth for beneficial technologies, new forms of agriculture, new transportation systems, etc., appears desirable in retrospect, at the end of a long journey, when it somehow becomes ‘obvious’ what all this is good for, but at the beginning of the path, this is almost unrecognizable to everyone. The pitiable state of the education system in many countries is a telling testament to the low regard for education as a training process for truth.

Given the rapid spread of unscrupulous internet media businesses accompanied by a worldwide surge in hybrid warfare, the survival probability of truth seems to be decreasing. Democracies, as the actual ‘bastions of truth’, are experiencing themselves as a place of accelerated ‘evaporation of truth’.

Collective Knowledge: Generative AI in chatbot format as a helper

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 10, 2024, Last change: July 10, 2024)

Starting Point

As the texts of this book will gradually show, the term ‘collective knowledge’ is a crucial keyword for a characteristic that deeply defines humans, the life form of ‘Homo sapiens’. For an individual, ‘collective knowledge’ is hardly perceivable directly, but without this collective knowledge no single human would have any knowledge at all. Yes, this not only sounds like a paradox, it is a paradox. While a ‘contradiction’ between two different statements represents a factual incompatibility, a ‘paradox’ also conveys the impression of a ‘contradiction’ but is, in terms of the matter itself, not an ‘incompatibility’. The ‘knowledge of us individuals’ is real knowledge, but due to the finiteness of our bodies, our perception system, and our memory, we can factually gather only a very small amount of knowledge ‘in us’. However, the more people there are, and the more ‘knowledge’ each person ‘produces’ daily, whether analog or digital, the greater grows the amount of knowledge that we humans ‘deposit’ in our world. Newspapers, books, libraries, and databases can collect and sort this knowledge to a limited extent, but an individual can find and ‘process’ only small fractions of this ‘collected knowledge’. The gap between the ‘available collected knowledge’ and the ‘individually processable knowledge’ is constantly growing. In such a situation, the availability of generative artificial intelligence in the format of chatbots (GAI-ChaBo) is almost an ‘evolutionary event’! This new technology does not solve all questions, but it can in principle give the individual novel, direct access to what we should call the ‘collective knowledge of humanity’.

Before the digitalization of the world …

Before the digitalization of the world, it was laborious to convey thoughts and knowledge so that others could become aware of them: initially only through oral tradition, then inscriptions in rock, stone, and clay, then parchment and papyrus. With the availability of paper, writing became easier (though durability was a problem); this led to collections of texts, to books, and to the first libraries with books (libraries existed even for cuneiform clay tablets). Great libraries like the ‘Library of Alexandria’ became precious ‘collection points of knowledge’, but during their existence they were also subject to various destructive events that could cause great losses of recorded knowledge.

A ‘mechanized production of books’ has been around since the 8th century, and modern book printing began in the 15th century. The development of libraries, however, progressed slowly for a long time, often only on a private basis. It was not until the 19th century that there was a stronger development of the library system, now including public libraries.

Despite this development, it remained difficult for an individual to access knowledge through a library, and even where this (usually privileged) access existed, the availability of specific texts, their inspection, and the making of notes, or later copies, was cumbersome and time-consuming. The access of the individual reader resembled a small ‘sampling’ that, even within the framework of scientific work, remained very limited over the years. Nor should the language problem be overlooked: the holdings of a library in a country A were predominantly restricted to texts in the language of country A, with only a small proportion of ‘foreign-language books’.

‘Acquisition of knowledge’ was therefore laborious, time-consuming, and very fragmented for an individual.

An increasingly important alternative to this hard-to-access field of library knowledge were modern magazines and journals, in many languages, with ever shorter ‘knowledge cycles’. However, the more such journals there are, the more painfully the natural limitations of the individual are felt in the face of the swelling journal knowledge. Currently (2024), it is hardly possible to estimate the exact number of scientific journals. In the field of computer science alone, an estimated 2,000 journals publish on the order of 25,000 (or more) articles per year, and scientific journals in Chinese alone are said to number over 10,000.[1]

[1] For more detailed information on the collection of Chinese journals, you can visit the East View page on China Academic Journals (CAJ) here.

With digitalization

Since the availability of the World Wide Web (WWW) in the 1990s, a unified information space has emerged that has continued to spread globally. Although we are currently witnessing an increasing ‘isolation’ of the WWW among countries, the development of a common information space is unstoppable.

Alongside this information space, technologies for ‘collecting,’ ‘storing,’ ‘retrieving,’ and ‘analyzing’ data have also evolved, making it increasingly possible to find ‘answers’ to ‘questions’ from ever more sources.

With the advent of so-called ‘Generative Artificial Intelligence in the format of Chatbots’ (GAI-ChaBo) since 2022, this ‘data utilization technology’ has reached a level that not only finds ‘raw data’ but also allows an individual user with their limited knowledge direct access to ‘collective human knowledge,’ provided it has been digitized.

For the ‘evolution of life on this planet,’ this availability of collective knowledge to the individual may be the most significant event since the appearance of Homo sapiens itself about 300,000 years ago. Why?

The next level?

The sustainability debate over the last roughly 50 years has contributed to a realization: alongside the rather individual perspective on life, and alongside strong regional or national interests and perspectives of success, perspectives have gradually come into consciousness (though not universally) which suggest, and are now substantiated by diverse data and models, that there are problems exceeding the event horizon of individual particular groups, and these groups can be entire nations. Many first think of ‘resources’ that are becoming scarce (e.g., fish stocks), polluted (the world’s oceans), or dangerously reduced (forest systems, raw materials, life forms, etc.). What has hardly been discussed so far, although it should be the most important topic, is the factor that produces all these problems: Homo sapiens himself, who by his behavior, indeed by his sheer numbers alone, causes all the known ‘problems’. And this does not happen ‘automatically’, but because the behavior of Homo sapiens on this planet is ‘controlled’ by his ‘inner states’ in such a way that he seems ‘incapable’ of changing his behavior, because he does not have his ‘inner states’ under control.

These inner states, roughly considered, consist of needs, different emotions, and collected experiences linked with knowledge. Knowledge provides the ‘images’ of oneself, of others, and of the world as a Homo sapiens sees it. Needs and emotions can block, ‘bind’, or change knowledge. The knowledge that is currently available nevertheless has great power: ultimately, a Homo sapiens can only do what his current knowledge tells him, if he listens to his knowledge and not to ‘others’ who mean something to him because of his life situation.

If one is now interested in a ‘possible future’, or, even more specifically, in a ‘possible future that is as good as possible for as many people as possible’, and sustainable, then the challenge arises of how people in the situation of everyday life, a certain form of the present, can ‘mentally’ surpass this present in such a way that, despite the current present, they can ‘somehow’ think a piece of ‘possible future’.

‘Generative Artificial Intelligence in the format of chatbots’ (GAI-ChaBo) can help make the (approximate) entirety of past knowledge accessible, albeit only punctually, driven by questions. But the ‘knowledge of the past’ provides ‘nothing new’ out of itself, and, above all, the past does not necessarily contain those ‘goals’ and ‘values’ that are necessary in the current present in order to ‘want’ precisely that possible future that will matter.

With this challenge, Homo sapiens collides full force ‘with himself’, and he will not be able to ‘hide behind GAI-ChaBo’. A GAI-ChaBo only ever delivers what people have previously said and done, albeit in a breadth that no individual could achieve; ultimately, a GAI-ChaBo functions only like a kind of ‘mirror of the human collective’. A GAI-ChaBo cannot replace humanity itself. A GAI-ChaBo is the product of collective human intelligence and can make the entirety of this collective intelligence visible in outline (an incredibly great achievement), but no more.

For the next level, Homo sapiens must somehow manage to ‘get a grip on himself’ in a completely different way than before. There are hardly any usable role models in history. What will Homo sapiens do? GAI-ChaBo is an extraordinary success, but it is not the last level. We can be curious…

Automation of Human Tasks: Typology with Examples of ‘Writing Text’, ‘Calculating’, and ‘Planning’

This text is part of the text “Rebooting Humanity”

The German version can be found HERE.

Author No. 1: Gerd Doeben-Henisch
Contact: cagent@cognitiveagent.org

(Start: June 5, 2024, Last updated: June 6, 2024)

Starting Point

In the broader spectrum of human activities, there are three ‘types’ of action patterns that are common in everyday life and crucial for shared communication and coordination: (i) writing texts, (ii) describing (calculating) quantitative relationships, and (iii) planning possible states in an assumed future. All three types have been present since the beginning of documented ‘cultural life’ of humans. The following attempts a rough typology along with known forms of implementation.

Types and their implementation formats

(a) MANUAL: We write texts ‘manually’ using writing tools and surfaces. Similarly, in quantitative matters, there is manual manipulation of objects that represent quantitative relationships. In planning, there is the problem of how to represent a ‘new’ state: if the ‘new’ is ‘already known’, one can revert to ‘images/symbols’ of the known; if it is ‘truly new’, it becomes difficult; there is no ‘automatism of the new’. How do you describe something that has never existed before? Added to this is the often-overlooked problem that ‘objects of planning’ are usually ‘value- and goal-dependent’; needs, intentions, expectations, and factual necessities can play a role. The latter can be ‘socially standardized’, but given the ‘radical openness of the future’, history has shown that ‘too strong standardizations’ can be a shortcut to failure.

(b) MANUAL SUPPORT: Skipping the phase of ‘partially mechanical’ support and moving on to the early phase of ‘computer support’, there are machines that can be ‘programmed’ using ‘programming languages’ so that both the writing instrument and the surface are represented by the programmed machine, which allows many additional functions (correcting texts, saving, multiple versions, automatic corrections, etc.). However, one still has to write oneself: letter by letter, word for word, etc. In ‘calculating’, writing down is still very laborious, but the ‘calculation’ then takes place partially ‘automatically’. Planning is similar to writing texts: the ‘writing down’ is supported (with all the additional functions), but ‘what’ one writes down is left to the user. Apart from ‘quantitative calculating’, a ‘projection’, a ‘prediction’ is generally not supported. An ‘evaluation’ is also not supported.

(c) LANGUAGE-BASED SUPPORT: The phase of ‘language-based support’ replaces manual input with speaking. For selected areas of text, this is becoming increasingly successful. For ‘quantitative matters’ (calculating, mathematics, etc.), it hardly works at all. For planning, it is also of only very limited use, namely where already formulated texts are concerned.

(d) ARTIFICIAL INTELLIGENCE ENVIRONMENTS: The Artificial Intelligence (AI) environment is considered here in the context of dialogue formats: the user can ask questions or send commands, and the system responds. The relevant formats of AI here are the so-called ‘generative AIs’ in chatbot format. Under the condition of ‘existing knowledge’ of humans in the format of ‘stored documents/images/…’ and under the condition of human ‘dialogue formats’ (also through explicit training), these generative AIs can process questions and orders in ‘formal proximity’ to the known material within a dialogue, so that one does not have to intervene oneself. Corrections and changes in detail are possible. Both in ‘text creation’ and in ‘calculating’, this can function reasonably well within the realm of the ‘known’. However, actual accuracy in the ‘real world’ is never guaranteed. In ‘planning’, the specific problem remains that for the AI the ‘truly new’ is possible only to a limited extent, within the framework of combinatorial variation. The truth reservation remains, but it also applies in the ‘manual case’, where humans plan themselves. The evaluation problem is likewise limited to already known evaluation patterns: the future is not the same as the past; the future is more or less ‘different’. Where should the necessary evaluations come from?

An interesting question remains in what sense the advancing support by generative AIs actually aids communication and coordination among people, and specifically whether, and to what extent, the central challenge of ‘future planning’ can be additionally supported by it. The fact is that we humans find future planning in the areas of social life, community life, and larger contexts such as counties, states, etc., difficult to very difficult. But for a sustainable future, successful planning seems indispensable.

A ‘Logic of Life’?

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 25, 2024, Last change: June 28, 2024)

Starting Point

The excerpt discusses the concept of ‘collective human intelligence (CHI)’ and reflects on the foundational schema of all life: reproduction of Generation 1, birth of Generation 2, growth of Generation 2, followed by the onset of Generation 2’s behaviors accompanied by learning processes, and then reproduction of Generation 2, etc. It highlights how genetic predispositions and ‘free adapting’, commonly referred to as ‘learning’, alternate in phases. While genetic guidelines enable structures with typical functionalities that open up ‘possible action spaces’, filling these spaces is not genetically determined. This makes sense because the real ‘biological carrier system’ is not isolated but exists in an ‘open environment’ whose specific configuration and dynamics constantly change. From a ‘sustainable survival’ perspective, it is crucial that the biological carrier system has the ability not only to grasp the nuances of the environment at specific moments but also to represent, combine, and test them in the context of space and time. These simple words point to a highly complex process that has become known as ‘learning’, but the simplicity of this term may overlook the fact that we are dealing with an ‘evolutionary miracle of the highest order’. The common concept of ‘evolution’ is too limited in this perspective; it only describes a fragment.

A ‘Logic of Life’?

Basic Pattern of All Life

The ‘basic pattern of all life’ provokes many considerations. It is striking how phases of genetic change, which bring forth new structures and functionality, ultimately transform the ‘initial space’ of genetic changes into new, significantly more complex spaces – not just once but repeatedly, and the more often, the more complexity comes within reach.

The life form of ‘Homo sapiens’ – us, who call ourselves ‘humans’ – represents a provisional peak of complexity in the temporal view of history so far, but already suggests from within itself a possible ‘next evolutionary stage’.

Even viewed up close, the individual human – with his structured cell galaxy, with its possible functions, with his individual learning – represents an extraordinary event relative to the entire known universe. Yet this ‘individual’ human in his current state is already fully designed for a ‘plurality of people’, for ‘collective behavior’, for ‘collective learning’, and certainly also for ‘collective achievements’.

[1] The world of ‘molecules’ is transformed into the world of ‘individual cells’; the world of ‘individual cells’ is transformed into the world of ‘many cells (cellular complexes)’; the world of ‘cell complexes’ is transformed into the world of ‘structured cell complexes’, …, the world of structured ‘cell galaxies’ is transformed into the world of ‘cooperating structured cell galaxies with individual and collective learning’, …

Temporal Classification

Not only have the last few millennia shown what many people can achieve together; the ‘modern engineering achievements’ in particular involve the collaboration of many thousands, if not tens of thousands, of experts, distributed globally, over extended periods (months, years, many years), simultaneously in many different languages, dealing with highly complex materials and production processes – processes in which meta-reflection and feedback loops are taken for granted. These processes, globally initiated since the great war in the mid-20th century, have since become more and more the everyday standard worldwide. [2] The invention of programmable machines, information networks, highly complex storage systems, and the provision of ever more ‘human-compatible interfaces’ (visual, acoustic, tactile, …), up to those formats that make it appear to the human user as if ‘behind the interface’ there is another living person (even if it is ‘just’ a machine) – all this has occurred within just about 70 years.

While it took a considerable amount of time from the first evidence of biological life on planet Earth (around 3.4 billion years ago) to the first proven appearance of Homo sapiens in North Africa (around 300,000 years ago), the development of the complex ‘mental’ and ‘communicative’ abilities of Homo sapiens from around 300,000 years ago onward was initially slow (invention of writing around -6000), but then accelerated significantly over the last approximately 150 years: the complex events are almost overwhelming. However, considering the entire time since the presumed formation of the universe about 13.7 billion years ago, a rough time schema emerges:

After about 75% of the total time of the existence of the universe, the first signs of biological life.

After about 99.998% of the total time of the existence of the universe, the first signs of Homo sapiens.

After about 99.999998% of the total time of the existence of the universe, the first signs of complex collective human-technical intelligence achievements.

This means that, in relation to the total time, the periods for the ‘latest’ leaps in complexity are so ‘short’ that they can no longer be distinguished on a large scale. This can also be interpreted as ‘acceleration’. It raises the question of whether this ‘acceleration’ in the creation of increasingly complex collective intelligence achievements reveals a ‘logic of process’ that would enable further considerations.
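The rough schema above can be reproduced with a few lines of arithmetic. The dates are the approximate figures used in the text; this is only a sketch, since the last percentage depends sensitively on which reference date one chooses for ‘complex collective human-technical intelligence achievements’.

```python
# Rough check of the timeline fractions quoted above.
# All dates are the approximate figures used in the text.
AGE_UNIVERSE = 13.7e9   # years since the presumed origin of the universe
FIRST_LIFE   = 3.4e9    # years ago: first signs of biological life
HOMO_SAPIENS = 3.0e5    # years ago: first proven Homo sapiens
TECH_INTEL   = 150.0    # years ago: assumed onset of complex collective
                        # human-technical intelligence achievements

def elapsed_fraction(years_ago: float) -> float:
    """Fraction of the universe's total age elapsed before the event."""
    return (AGE_UNIVERSE - years_ago) / AGE_UNIVERSE

print(f"{elapsed_fraction(FIRST_LIFE):.2%}")    # ~75%
print(f"{elapsed_fraction(HOMO_SAPIENS):.4%}")  # ~99.998%
print(f"{elapsed_fraction(TECH_INTEL):.7%}")    # indistinguishable from 100% at coarse scale
```

The last two events differ from ‘now’ by amounts so small relative to 13.7 billion years that, plotted on a single timeline, they would collapse into one point – which is exactly the ‘acceleration’ the text describes.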

[2] Here began the career of the modern form of ‘Systems Engineering’, a quasi-standard of problem solving, at least in the English-speaking world.

Complexity Level: Biological Cell

With the description of a ‘basic pattern of all life’, a pattern emerges that is describable at least from the complexity level of a biological cell onwards.

The complexity level preceding the biological cell is that of ‘molecules’, which can be involved in different process chains.

In the case of the biological cell, we have, among other things, the case where molecules of type 1 are used by molecules of type 2 as if the type 1 molecules were ‘strings’ that ‘represent’ molecules of type 3, which are then ‘produced’ through certain chemical processes. Put differently, there are material structures that interpret other material structures as ‘strings’, possessing a ‘meaning assignment’ that leads to the creation of new material structures.

Thus, biological cells demonstrate the use of ‘meaning assignment’ as we know it structurally from symbolic languages in the case of complex cell galaxies. This is extremely astonishing: how can ‘ordinary molecules’ of type 2 have a ‘meaning assignment’ that allows them to interpret other molecules of type 1 as ‘strings’ in such a way that they – according to the meaning assignment – lead to the organization of other molecules of type 3, which ultimately form a structure with functional properties that cannot be derived ‘purely materially’ from the type 1 molecules?
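The described string interpretation can be made concrete with a toy sketch, loosely modeled on the codon-to-product translation in cells. The tiny mapping table below is illustrative only, not the real genetic code; the point is that the ‘interpreter’ carries the meaning assignment, not the string itself.

```python
# Toy model of 'meaning assignment': a type-2 structure (the interpreter,
# here a lookup table plus a reading procedure) reads a type-1 structure
# (the 'string') and organizes type-3 products from it.
# The table is illustrative, NOT the real 64-entry codon table.
CODON_TO_PRODUCT = {
    "AUG": "Met",   # in real biology: start codon / methionine
    "UUU": "Phe",
    "GGC": "Gly",
}

def interpret(strand: str) -> list[str]:
    """Read the strand in triplets and 'translate' each known triplet."""
    products = []
    for i in range(0, len(strand) - 2, 3):
        triplet = strand[i:i + 3]
        products.append(CODON_TO_PRODUCT.get(triplet, "?"))
    return products

print(interpret("AUGUUUGGC"))  # ['Met', 'Phe', 'Gly']
```

Nothing in the strand "AUGUUUGGC" contains the products ‘purely materially’; the products arise only through the meaning assignment embodied in the interpreter – which is the structural point the paragraph makes about molecules.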

… !! New text in preparation !!..

[3] In this context, the term ‘information’ (or ‘biological information’) is commonly used in the literature. If this usage refers to the terminology of Claude Shannon, then it would be difficult to apply, as in the specific case it is not about the transmission of ‘signal elements’ through a signal channel to ‘received signal elements’ (a structural 1-to-1 mapping), but about an assignment of ‘signs (= signal elements)’ to something ‘completely different’ than the original signal elements.

A ‘Logic’?

When the main title tentatively (‘hypothetically’) mentions a ‘Logic of Life’, it is important to clarify what specifically is meant by the term ‘logic’ as a possible concept.

The term ‘logic’ dates back to Aristotle, who introduced it around 2400 years ago in Greece. Around 1000 AD it was transmitted, via Islamic culture, into the Latin of the Christian Middle Ages, profoundly influencing the intellectual life of Europe until the late Middle Ages. In contrast to ‘modern formal logic’ – from the late 19th century onwards – ‘Aristotelian logic’ is also referred to as ‘classical logic’.

If one disregards many details, classical and modern logic differ fundamentally in one aspect: in classical logic, the ‘linguistic meaning’ of the expressions used plays an important role, whereas in modern logic, linguistic meaning is completely excluded. ‘Mutilated remnants’ of meaning can still be found in the concept of an ‘abstract truth’, which is reflected in ‘abstract truth values’, but their ‘meaning content’ is completely empty.

The concept of both classical and modern logic‚ÄĒdespite all differences‚ÄĒis united by the concept of ‘logical reasoning’: Suppose one has a set of expressions that are deemed ‘somehow true’ by the users of logic, then there are ‘rules of application’ on how to generate other expressions from the set of ‘assumed true expressions’, which can then also be considered ‘true expressions’. This ‘generation’ of new expressions from existing ones is called ‘reasoning’ or ‘inference’, and the ‘result’ of the reasoning is then a ‘conclusion’ or ‘inference’.

A more modern – formulaically abbreviated – notation for this matter would be:

A ⊢Tr B

Here, the symbol ‘A’ represents a set of expressions assumed to be true, ‘Tr’ stands for a set of transformation instructions (usually called ‘inference rules’), ‘B’ stands for a generated (derived) expression, and ‘⊢’ refers to an ‘action context’ within which users of logic apply the transformation rules to ‘generate B based on A’.

A ‘normal’ logician does not, in the case of the symbol ‘⊢’, speak of an ‘action context’ but usually just of a ‘concept of inference’ or – with an eye to the widespread use of computers – of an ‘inference mechanism’. However, this way of speaking should not obscure the fact that what actually exists are concrete ‘objects’ in the form of the expressions ‘A’ and ‘B’, and also in the form of the expressions ‘Tr’. These expressions as such have no ‘meaning’, nor can they ‘generate anything by themselves’. For the concrete expressions ‘B’ to be classified as an ‘inference’ from the expressions ‘A’ by means of ‘Tr’, a real ‘process’ must take place in which ‘B’ is ‘really generated’ from ‘A’ ‘in the sense of Tr’.

A process is a real event ‘in time’: there is a real state that contains the object ‘A’, and a real logic user who has a ‘concept (= model)’ of ‘logical reasoning’ in his head, in which the ‘expressions’ of the generation rules Tr are linked with concrete process steps (the meaning of the expressions Tr). The logic user can thus identify the expressions belonging to A as parts of the generation rules, so that the generation rules can assign a new expression B to the expressions A. If this assignment ‘in the mind of the logic user’ (commonly referred to as ‘thinking’) succeeds, he can then, in a ‘subsequent situation’, write down a new expression B referring to the concrete expressions Tr. Another logic user will only accept this new expression ‘B’ if he too has a ‘concept (= model)’ of logical reasoning in his head that leads to the same result ‘B’ in his mind. If the other logic user comes to a different result than ‘B’, he will object.
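The schema A ⊢Tr B can be sketched as a tiny rule-application process. This is a minimal illustration, not a full inference engine: Tr here is a single modus-ponens-style rule over plain strings, and the ‘real process’ is the loop that actually generates B from A.

```python
# Minimal sketch of A |-Tr B: a set A of expressions assumed true,
# a transformation rule Tr, and a concrete process that generates B.
# Expressions are plain strings; 'x -> y' encodes an implication.
A = {"p", "p -> q"}

def modus_ponens(exprs: set[str]) -> set[str]:
    """Tr: from 'x' and 'x -> y' in exprs, generate 'y'."""
    derived = set()
    for e in exprs:
        if " -> " in e:
            antecedent, consequent = e.split(" -> ", 1)
            if antecedent in exprs:
                derived.add(consequent)
    return derived

B = modus_ponens(A)
print(B)  # {'q'}
```

Note how the sketch mirrors the text: the strings ‘p’ and ‘p -> q’ generate nothing by themselves; only the executed function – the counterpart of the logic user’s ‘thinking’ – really produces ‘q’.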

–!! Not finished yet! —

Collective Human Intelligence (CHI)

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 22, 2024, Last change: June 22, 2024)

Starting Point

The main theme of this section is ‘collective human intelligence (CHI)’. However, it does not occur in isolation, detached from everything else. Rather, life on Planet Earth creates a complex network of processes which – upon closer examination – reveal structures that are consistent across all forms of life in their basic parameters, yet differ in parts. It must also be considered that these basic structures, in their process form, always intertwine with other processes, interacting and influencing each other. Therefore, a description of these basic structures will initially be rather sketchy here, as the enormous variety of details can otherwise lead one quickly into the ‘thicket of particulars’.

Important Factors

Basic Life-Pattern

Starting from the individual cell up to the most powerful cell galaxies [1] that life has produced so far, every identifiable form of life exhibits a ‘basic pattern’ of the life process that threads like a ‘red thread’ through all forms of life: as long as there is more than a single life system (a ‘population’), there exists throughout the entire lifespan the basic cycle (i) reproduction of Generation 1 – birth of Generation 2 – growth of Generation 2 – onset of behavior of Generation 2 accompanied by learning processes – reproduction of Generation 2 – …

Genetic Determinism

This basic pattern, in the phases of ‘reproduction’ and ‘birth’, is largely ‘genetically determined’. [2] The growth process itself – the development of the cell galaxy – is also fundamentally strongly genetically determined, but increasingly factors from the environment of the growth process come into play, which can modify the growth process to varying degrees. Thus, the outcomes of reproduction and growth can vary more or less significantly.

Learning

As growth transforms the cell galaxy into a ‘mature’ state, the entire system can enable different ‘functions’ that increasingly facilitate ‘learning’.

A minimal concept of learning related to acting systems identifies the ‘learning of a system’ by the fact that the ‘behavior’ of a system changes in relation to a specific environmental stimulus over a longer period of time, and this change is ‘more than random’. [3]

‘Learning’ ranges on a scale from ‘relatively determined’ to ‘largely open’. ‘Open’ here means that the control by genetic presets decreases and the system can have individually different experiences within a spectrum of possible actions.

Experiences

Experiences gain their ‘format’ within the following coordinates:

(i) (sensory) perception,

(ii) abstractions of sensory perceptions which are generated internally and which

(iii) are retrievable inside the system. The totality of such conditionally retrievable (recallable) abstractions is also called memory content. [4]

(iv) the possibility of arbitrary abstractions from abstractions, the

(v) storage of sequential events as well as

(vi) abstractions of sequentially occurring events, and the

(vii) free combination of abstractions into new integrated units.

Additionally,

(viii) the ‘perception of internal bodily events’ (also called ‘proprioceptive’ perception) [5], which can ‘link (associate)’ with all other perceptions and abstractions. [6] It is also important to note that it is a characteristic of perception that

(ix) events usually never occur in isolation but appear as ‘part of a spatial structure’. Although ‘subspaces’ can be distinguished (visually, acoustically, tactilely, etc.), these subspaces can be integrated into a ‘total space’ that has the format of a ‘three-dimensional body space’, with one’s own body as part of it. ‘In thought’, we can consider individual objects ‘by themselves’, detached from a body space, but as soon as we turn to the ‘sensual space of experience’, the three-dimensional spatial structure becomes active.

Individual ‚Äď Collective

In the individual experience of everyday situations, this ‘inner world of experience’ forms in a multitude of ways, largely unconsciously.

However, as soon as ‘human systems’ – in short, people – are not alone in a situation but together with others, these people can perceive and remember the same environment ‘similarly’ or ‘differently’. In their individual experiences, different elements can combine into specific patterns.

Coordination of Behavior

It becomes interesting when different people try to coordinate their behavior, even if it is just to make ‘contact’ with each other. And, although this can also be achieved in various ways without explicit symbolic language [7], sophisticated, complex collective action involving many participants over long periods with demanding tasks, according to current knowledge, is only possible with the inclusion of symbolic language.

The ability of humans to use languages seems to be fundamentally genetically conditioned. [8] ‘How’ language is used, ‘when’, ‘with which memory contents language is linked’, is not genetically conditioned. People must learn this ‘from the particular situation’ both individually and collectively, in coordination with others, since language only makes sense as a ‘common means of communication’ that should enable sophisticated coordination.

Linguistic Meaning

The system of sounds (and later symbols) used in a language is ultimately decided by those who initially use the language together. [9] Given the fact that there are many thousands of languages [10], one can conclude that there is considerable freedom in the specific design of a language. [11]

This ‘freedom in the specific design’ is most evident in the association between language elements and potential meanings. The same ‘object’, ‘event’, or ‘fact’ can be expressed completely differently by people using different languages. [12] And this ‘difference’ refers not only to the naming of individual properties or objects but occurs within a ‘larger context’ of ‘perspectives’ on how the everyday world is perceived, classified, and enacted within a specific language community. These differences also occur ‘within a language community’ in the form of ‘milieus’/’social strata’, where the ‘practice of life’ differs.

The most important aspect of the relationship between the language system (sounds, symbols, the arrangement of sounds and symbols) and possible ‘meaning content’ is that any type of ‘content’ is located within the person (inside the ‘system’).[13]

As the brief sketch above on the space of experience suggests, the elements that can constitute an experiential space are so layered and dynamic that an assignment between language elements and elements of experience is fundamentally incomplete. Additionally, two different systems (people) can only agree on elements of experience as ‘linguistic meaning’ if they have ‘common reference points’ in their individual experiential spaces. Due to the structure of the system, there exist only the following ‘types of reference points’:

(i) There are sensory perceptions of circumstances in a commonly shared environment that can be perceived ‘sufficiently’ similarly by all participants (and are automatically (unconsciously!) transformed into more abstract units in the experiential space).

(ii) There are internal body perceptions that normally occur similarly in each person due to a genetically determined structural similarity.[14]

(iii) There are internal perceptions of internal processes that also normally occur similarly in each person due to a genetically determined structural similarity. [15]

The more ‘similar’ such internal structures are and the more ‘constant’ they occur, the greater the likelihood that different language participants can form ‘internal working hypotheses’ about what the other means when using certain linguistic expressions. The more the content of experience deviates from types (i)–(iii), the more difficult or impossible it becomes to reach an agreement.

The question of ‘true statements’ and ‘verifiable predictions’ is a specific subset of the problem of meaning, which is treated separately.

Complex Language Forms

Even the assignment and determination of meaning in relatively ‘simple’ linguistic expressions is not straightforward, and it becomes quickly ‘blurred’ and ‘vague’ in ‘more complex language forms’. The discussions and research on this topic are incredibly extensive.[16]

I would like to briefly recall the example of Ludwig Wittgenstein, who first experimentally played through the very simple meaning concept of modern formal logic in his early work, ‘Tractatus Logico-Philosophicus’ (1921), but then, many years later (1936–1946), reexamined the problem of meaning using everyday language and many concrete examples. He discovered – not surprisingly – that ‘normal language’ functions differently and is significantly more complex than the simple meaning model of formal logic assumed. [17] What Wittgenstein discovered in his ‘everyday’ analyses was so multi-layered and complex that he found himself unable to transform it into a systematic presentation. [18]

Generally, it can be said that to this day there is not even a rudimentary comprehensive and accurate ‘theory of the meaning of normal language’. [19]

The emergence and proliferation of ‘generative artificial intelligence’ (Generative AI) in recent years [20] may offer an unexpected opportunity to revisit this problem entirely anew. Here, modern engineering demonstrates that simple algorithms, which possess no inherent meaning knowledge of their own, are capable of producing linguistic output by using only language material in linguistic interaction with humans. [21] This output is substantial, structured, and arranged in such a way that humans perceive it as if it were generated by a human, which is ultimately true. [22] What a person admires in this linguistic output is essentially ‘himself’, though not in his individual language performances, which are often worse than those generated by the algorithms. What the individual user encounters in such generated texts is essentially the ‘collective language knowledge’ of millions of people, which would not be accessible to us without these algorithms in this extracted form.
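The point that an algorithm with no meaning knowledge of its own can still produce plausible word sequences, purely from language material, can be illustrated with a toy bigram model. This is a deliberately minimal sketch on an invented mini-corpus; real generative AIs are vastly larger and more sophisticated, but the structural point is the same.

```python
import random
from collections import defaultdict

# Toy bigram model: it 'learns' only which word follows which in its
# input text; it has no meaning knowledge of its own.
corpus = "the cat sat on the mat the cat saw the dog".split()

# Record, for each word, the words observed to follow it.
followers = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    followers[w1].append(w2)

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Chain observed word-successions into a new sequence."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        nxt = followers.get(words[-1])
        if not nxt:  # dead end: the last corpus word has no successor
            break
        words.append(random.choice(nxt))
    return words

print(" ".join(generate("the", 6)))
```

Every generated sequence consists solely of word pairs that humans actually produced in the corpus – a miniature of the point that what the user ‘admires’ in such output is extracted collective language use, not machine understanding.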

These generative algorithms [23] can be compared to the invention of the microscope, the telescope, or modern mathematics: all these inventions have enabled humans to recognize facts and structures that would have remained invisible without these tools. The true extent of collective linguistic performances would remain ‘invisible’ without modern technology, simply because humans without these technologies could not comprehend the scope and scale, along with all the details.

Preliminary Interim Result

The considerations so far only give a first outline of what collective intelligence can be or is.

[1] Reminder: If we assume that the number of stars in our home galaxy, the Milky Way, is estimated at 100–400 billion stars and we take 200 billion as our estimate, then our body system would correspond to the scope of 700 galaxies the size of the Milky Way, one cell for one star … a single body!

[2] We know today that genes can change in different ways or be altered during various phases.

[3] Simplifying, we can say that ‘randomness’ in a ‘distribution-free form’ means that each of the known possible continuations is ‘equally likely’; none of the possible continuations shows a ‘higher frequency’ over time than any of the others. Randomness with an ‘implicit distribution’ is noticeable because, although the possible continuations are random, certain continuations show a different frequency over time than others. All these particular individual frequencies together reveal a ‘pattern’ by which they can be characterized. An example of ‘randomness with a distribution’ is the ‘normal (or Gaussian) distribution’.
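The distinction in [3] can be made concrete with two small samplers: one distribution-free (all continuations equally likely), one with an implicit distribution (unequal weights, chosen here arbitrarily for illustration), whose frequency pattern emerges over many draws.

```python
import random
from collections import Counter

random.seed(42)
N = 10_000

# Distribution-free randomness: each of the four options is equally likely,
# so no option shows a higher frequency over time than the others.
uniform_draws = Counter(random.choice("ABCD") for _ in range(N))

# Randomness with an implicit distribution: options weighted unevenly
# (weights are arbitrary illustrative values), so a stable frequency
# 'pattern' emerges over time.
weighted_draws = Counter(random.choices("ABCD", weights=[1, 2, 4, 8], k=N))

print(uniform_draws)   # all four counts near N/4
print(weighted_draws)  # counts roughly in the ratio 1:2:4:8
```

Each single draw is unpredictable in both cases; only the accumulated frequencies reveal whether a distribution is hiding behind the randomness.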

[4] The concept of ‘memory’ is a theoretical notion, the empirical and functional description of which has not yet been fully completed. Literature distinguishes many different ‘forms of memory’ such as ‘short-term memory’, ‘long-term memory’, ‘sensory memory’, ‘episodic memory’, and many more.

[5] For example, ‘joint positioning’, ‘pain’, ‘feeling of fullness’, ‘discomfort’, ‘hunger’, ‘thirst’, various ’emotions’, etc.

[6] One then perceives not just a ‘blue sky’ and feels a ‘light breeze on the skin’ at the same time, but also experiences a sense of ‘malaise’, ‘fever’, perhaps even a ‘feeling of dejection’ and the like.

[7] In biological behavioral research, there are countless examples of life forms showing impressive coordination achievements, such as during collective hunting, organizing the upbringing of offspring, within the ‘family unit’, learning the ‘dialects’ of their respective languages, handling tools, etc.

[8] Every child can learn any known human language anywhere, provided there is an environment in which a language is practiced. Many thousands of different languages are known.

[9] The exact mechanism by which a language first arises among a group of people remains unclear to this day. However, there are increasingly more computer simulations of language learning (language evolution) that attempt to shed light on this. Given the enormous number of factors involved in real language use, these simulations still appear ‘rather simple’.

[10] As we know from many records, there were many other languages in previous times that eventually died out (‘dead languages’), and only written records of them exist. It is also interesting to consider cases in which a language has ‘evolved’, where older forms continue to exist in texts.

[11] Language research (many different disciplines work together here) suggests the working hypothesis that (i) the type and scope of sounds used in a language, due to the human speech apparatus, represent a finite set that is more or less similar across all languages, although there are still different sets of sounds between the major language families. Furthermore, (ii) many analyses of the structure of spoken and then written language suggest that there are ‘basic structures’ (‘syntax’) that can be found – with variations – in all languages.

[12] Anyone who comes into contact with people who speak a different language can experience this up close and concretely.

[13] In light of modern brain research and general physiology, the ‘location’ here would be assumed to be the brain. However, this localization is of little use, as research into the material brain in the body, along with its interactions with the surrounding body, has hardly been able to grasp the exact mechanisms of ‘meaning assignments’ (just as one cannot identify the algorithm being executed from the physical signals of computer chips alone).

[14] For example, ‘joint positioning’ aside: ‘hunger’, ‘thirst’, ‘toothache’ …

[15] The ‘remembering’ of something that has happened before; the ‘recognition’ of something sensually concrete that one can ‘remember’; the ‘combining’ of memorable things into new ‘constellations’, and much more.

[16] Parts of this discussion can be found in the context of ‘text analyses’, ‘text interpretations’, ‘hermeneutics’, ‘Bible interpretation’, etc.

[17] Which is not surprising at all, since modern formal logic could only arise because it had programmatically radically departed from what has to do with everyday linguistic meaning. What was left were only ‘stubs of an abstract truth’ in the form of abstract ‘truth values’ that were devoid of any meaning.

[18] His posthumously published ‘Philosophical Investigations’ (1953) therefore offer ‘only’ a collection of individual insights, but these were influential enough to impact reflections on linguistic meaning worldwide.

[19] The list of publications titled around the ‘meaning of language’ is exceedingly long. However, this does not change the fact that none of these publications satisfactorily solve the problem comprehensively. It is currently not foreseeable how a solution could emerge, as this would require the cooperation of many disciplines, which in current university operations are well distributed and separated into ‘existences of their own’.

[20] With chatGPT as an example.

[21] Millions of texts produced by humans for the purpose of ‘communicating content’.

[22] Which ultimately is true, because the algorithms themselves do not ‘invent’ text, but use ‘actually used’ linguistic expressions from existing texts to generate ‘highly probable’ combinations of expressions that humans would likely use.

[23] These cannot be seen in isolation: without extremely powerful computing centers along with corresponding global networks and social structures that make widespread use possible, these algorithms would be worthless. Here, indirectly, what has become possible and functions in everyday life due to collective human intelligence also shines through.

Blind’s World One (1995!)

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Last modified: June 14, 2024)

Starting Point

How can one philosophically conceive of artificial intelligence (AI) interacting with real people, an AI that learns real language with real meaning on its own? Prompted by an offer from Ars Electronica ’95 to introduce a philosophically inspired art project, I spent intense months with an ad hoc software team (the team was wonderful!) designing an interactive network version of a small artificial world based on philosophical considerations, which the software team then implemented. In this world lived ‘blind Knowbots’ that could communicate with the outside world, using their other sensory experiences and basic bodily states as the basis for assigning meanings to their respective languages. Whatever language the users (mostly children!) used, they could link it with their real-world experiences. This experiment has shaped me for many years, indeed up to today.

Blind’s World One

(The text was copied from the Ars Electronica ’95 book since the text is no longer accessible)

Humans and machines that can generate sound

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 14, 2024, Last Modification: June 14, 2024)

Starting Point

Since September 2015, I have been repeatedly trying, both theoretically and practically, to understand what sound art really is: What is sound? What does it do to us? One consideration led to another; between them were real experiments and live performances. There were also long periods of ‘standstill’… At a sound art concert on June 11, 2024, at Mousonturm in Frankfurt, something clicked in my mind regarding a fundamental question, and suddenly the uniqueness of ‘collective human intelligence’ in confrontation with so-called ‘intelligent machines’ became newly clear to me.

XEROX EXOTIQUE #090 – IMPRESSIONS

This post on an associated blog is about people and machines that can generate sound.

The trigger was a sound art event at the Mousonturm in Frankfurt am Main on June 11, 2024.

Here comes the translation:

A Hint from a Friend

Following a tip from Tobias (PiC, Xerox Exotique, …), I made a trip yesterday to the sound art event #090, organized by Xerox Exotique at the Mousonturm in Frankfurt am Main.

Impressions

SKETCH: Mousonturm, a small event area to the right of the entrance with a small stage. Some participants are highlighted. Detailed information about the event can be found on the XEROX EXOTIQUE website (xeroxex.de).

What to Talk About?


A sound art event like this offers numerous starting points for discussion…

Since the beginning of Philosophy in Concert (PiC), I have been driven by the question of how to situate soundscapes in the life world of people so that they do not seem like ‘foreign bodies,’ somehow ‘detached’ from the process of humans on this planet, but become visible as a ‘living part’ of this very real and dynamic process.

At concerts based on written music (scores…), everything revolves around the sets of symbols that someone has produced, which others convert into sounds, and perhaps around the person who holds the ‘office of the interpreter’ and tells the other performers how they should convert them. The ‘typically human’ aspect may then be recognized in the ‘background of the notation,’ in the way of ‘converting’ or ‘interpreting,’ and then in the effect of the sound cloud in the room on the people who sit, listen, and experience various emotions…

How much of the human process is revealed in such a form of event?

There is almost never any talking, and if there is, what is there to talk about? About one’s own feelings? About the technical intricacies of the written score? About the skill of the performers? About the beauty of a voice? Yes, it is not easy to integrate the sound event into the life process… and yet it affects us somehow; one remembers it, talks about it later, may rave or complain…

The Hidden Human


Let’s briefly change the context and dive directly into the current global euphoria over the new chatbots, products of ‘generative Artificial Intelligence’ (chatGPT & Co), which fascinate more and more people in everyday life.

The algorithms behind the interface are comparatively simple (although the global deployment is an impressive feat of engineering). What fascinates people in front of the interface is how human the algorithms appear there. They use everyday language just as ‘we humans’ do, ultimately even better than most of those who sit in front of it. And, almost irresistibly, many see, because of this language and the accessible knowledge ‘behind the interface’, not a simple machine but something ‘profoundly human’. What is ‘human’ about this appearance, however, are the words, sentences, and texts that the simple algorithm has compiled from millions of documents, all of which come from humans. On its own, this algorithm could not generate a single sentence; it lacks the fundamental prerequisites. The ‘actual’ wonder sits in front of the ‘apparent’ wonder: it is we humans who have, are, and represent something we are barely aware of ourselves (we are ‘blind through ourselves’), and we marvel when simple algorithms show us what we are, ultimately like a mirror of humanity. But most do not notice; we get excited about simple algorithms and forget that we ourselves are exactly the wonder that has produced all this and continues to produce it. We become blind to the real wonder that we ourselves are, each of us, all together.

Collective Intelligence – Collective ‘Spirit’…

In the case of algorithms, the term ‘artificial intelligence (AI)’ has been used for a long time, and, more moderately, ‘machine learning (ML)’. However, the concept of intelligence has not yet been truly standardized, even though psychology has developed and experimentally researched interesting concepts of ‘intelligence’ for humans (e.g., the ‘Intelligence Quotient (IQ)’) for about 120 years. The communication between psychology and computer science, however, has never been very systematic; rather, everyone does ‘their own thing’. Thus, precisely determining the relationship between ‘human intelligence (HI)’ and ‘artificial intelligence (AI)’ has so far been quite difficult; the terms are too vague and not standardized. Moreover, it is complicated by the fact that the truly impressive achievements of humans are not their ‘individual achievements’ (although these are important), but everything that many people together over a long time have accomplished or are accomplishing. The term ‘Collective Human Intelligence (CHI)’ points in this direction but is probably too narrow, as it is not just about ‘intellect’ but also about ‘communication’, ‘emotions’, and ‘goals’. Unfortunately, research on Collective Human Intelligence still lags far behind. The focus on the individual runs deep, and in times of artificial intelligence, where individual machines achieve remarkable feats (on the basis of the collective achievements of humans!), even the study of individual human intelligence has fallen into the shadow of attention.

How do we get out of this impasse?

Sound Art as a Case Study?


I hadn’t attended a sound art concert in many years. But there were still memories, various aspects swirling through my mind.

The tip from Tobias catapulted me out of my usual daily routines into such a sound art event at the Mousonturm on June 11, 2024, at 8:00 pm.

As I said, there is a lot to talk about here. For a long time, I have been preoccupied with the question of the ‘collective’ dimension in human interaction. The ‘synchronization’ of people by algorithms is nothing unusual. In a way, humans have always been ‘standardized’ and ‘aligned’ by the prevailing ‘narratives,’ and the rapid spread of modern ‘narratives’ and the speed with which millions of people worldwide join a narrative is a fact. Most people (despite education) are apparently defenseless against the emergence of ‘narratives’ at first, and then very soon so strongly ‘locked-in’ that they reproduce the narratives like marionettes.

What role can ‘sound art’ play against such a backdrop? Sound art, where there is nothing ‘written’, no ‘central interpreter’, no ‘converters of the written’, but, yes, what?

That evening, the first group, ‘Art Ensemble Neurotica’, seemed to me to most broadly illustrate the profound characteristics of sound art. In the two following solo acts, where the individual performer interacted with sound they themselves produced, the special dimension of sound art was also present, in my view, but more concealed due to the arrangement.

In the case of Neurotica, four people generated sound, live, each individually: Dirk Hülstrunk (narrator) – Michael Junck (digital devices) – Johannes Aeppli (percussion) – Guido Braun (strings & conductor). Each person on stage was a ‘cause’, equipped with instruments that allowed all sorts of sound effects. There were no written notes; there had been no real rehearsal beforehand, only some arrangements (according to Guido).

Anyone who knows how diversely each individual can generate sound under these conditions can imagine the tension that arises in this seemingly infinite space: what will happen next?

Describing in detail the totality of sound that emanated from the four performers over 45 minutes is nearly impossible. At no stage did it seem that one sound source drowned out or overwhelmed another (I exchanged views immediately afterwards with Roland, incorrectly identified as Robert in the sketch, who sat next to me; we didn’t know each other, it was a coincidence that we sat side by side); everything appeared side by side and intertwined in a somehow ‘fitting form’, appealing and stimulating. Patterns from all four individual sources could be recognized interacting with each other over extended phases, yet they were supple, changing shape. Effects like volume shifts, echo, reverb, distortion, etc., did not feel out of place but seemed ‘harmonic’… giving each source a ‘character’ that combined with the others to form an overall impression…

Can such an arrangement of sounds be taken ‘purely abstractly’, detached from their creators? Could software generate such a complex sound event?

While the listener initially hears only the produced sound and might not immediately decide, from this perspective, whether it matters who produces this sound and how, from the perspective of creation it quickly becomes clear that these sounds cannot be isolated from the producer, from the ‘inner states’ of the producer. Ultimately, the sound is created in the moment, in the interaction of many moments inside each individual actor (human). And this individual is not ‘alone’: through perception and many jointly experienced sound processes, each possesses a ‘sound knowledge’ that is more or less ‘shared internally’ with others, and thus each can bring their current inner states into a ‘dialogue’ with this ‘shared sound knowledge’. It is precisely this inner dialogue (largely unconscious) that provides opportunities for complex synchronizations, which an individual alone, without a shared history of sound, could not achieve. The resulting complex sounds are therefore not just ‘sound’ but manifestations of the internal structures and processes of the creators, which, as internal meaning, are linked with the external sound. Sound art sound is therefore not just sound one hears; it is also fully a kind of ‘communication’ of ‘inner human states’, spread over various collaborating individuals, and thus a true collective event that presupposes the individual but extends far beyond them in the happening. In this form of distributed sound art, the individual can experience themselves as a ‘WE’ that would otherwise remain invisible.

Postscript


So, I now have this strange feeling that participating in this sound art event has led me deeper into the great mystery of us humans, who we are, that we have a special dimension of our existence in our ability to ‘collectively feel, think, and act,’ which somewhat liberates us from ‘individuality’ towards a very special ‘We’.

While a soundscape is ‘authentic’ and as such cannot be ‘undermined’, ‘narrative spaces’, that is, the use of language with an assumed but not easily controllable potential meaning, are extremely ‘dangerous’ spaces: assumed meanings can be false and, as we can see today on a global scale, are predominantly wrong, with correspondingly devastating consequences. Moving in distributed sound spaces carries its ‘meaning’ ‘within itself’; the ‘Self in sound together’ cannot be undermined; it is mercilessly direct. Perhaps we need more of this…

Changes

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 14, 2024, Last Modification: June 14, 2024)

Starting Point

In both the section “Talking about the World” and the section “Verifiable Statements,” the theme of ‘change’ continuously emerges: our everyday world is characterized by everything we know being capable of ‘changing,’ including ourselves, constantly, often unconsciously; it just happens. In the context of people trying to collectively create an understanding of the world, perhaps also attempting to ‘plan’ what should be done together to achieve the best possible living situation for as many as possible in the future, the phenomenon of ‘change’ presents an ambivalent challenge: if there were no change, there would be no future, only ‘present’; but with change occurring, it becomes difficult to ‘look into the future’. How can we know into what future state all these changes will lead us? Do we even have a chance?

Changes

Motivation


In the current scenario, we assume a context of people trying to collectively form a picture of the world, who may also be attempting to ‘plan’ joint actions. It’s essential to recognize that the ‘relevant’ topics of interest are influenced by ‘which people’ one is working with, as each group within a society can and often does have its ‘own perspectives’. It is not only in ‘autocratic’ societal systems that citizens’ perspectives can be easily overlooked; there are plenty of examples in officially ‘democratic’ systems where citizens’ concerns are also overlooked, warranting closer analysis.

This discussion initially focuses on the fundamental mechanisms of ‘change’, specifically the ‘collective description’ of changes. The motivation for this emphasis stems from the fact that different people can only ‘coordinate (align) their actions’ if they first manage to ‘communicate and agree’ on the ‘contents of their actions’ through ‘communication processes’.

While simple situations or small groups may manage with verbal communication alone, most scenarios require ‘written texts’ (documents). However, written text has a disadvantage compared to direct speech: a ‘text’ can be ‘read’ in a situation where the ‘reader’ is not currently in the situation being described. In terms of ‘verifiability of statements’, this presents a real challenge: every text, due to ‘learned meaning relationships’, automatically has a ‘meaning’ that is activated ‘in the mind of the reader’, but it is crucial to verify whether there is a ‘real verifiable correspondence’ to the situation ‘described’ in the text.

If we assume that a group of people seriously contemplates a ‘future’ that they believe is ‘more likely to occur than not’, not just ‘theoretically’ but ‘actually’, then there must be a way to design the description of a ‘starting situation’ such that all participants have a chance to verify its accuracy in their shared everyday life.

Verifiable Statements

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 7, 2024, Last change: June 9, 2024)

Starting Point

Speaking in everyday life entails that through our manner of speaking, we organize the perceptions of our environment, solely through our speech. This organization occurs through thinking, which manifests in speaking. As previously described, while the ability to speak is innate to us humans, the way we use our speech is not. In speaking, we automatically create an order, but whether this order actually corresponds to the realities of our everyday world requires additional verification. This verification, however, does not happen automatically; we must explicitly desire it and carry it out concretely.

Verifiable Statements

If one accepts the starting point that linguistic expressions, which enable our thinking, are initially ‘only thought’ and require additional ‘verification in everyday life’ to earn a minimal ‘claim to validity in practice’, then this basic idea can be used as a starting point for the concept of ’empirical verifiability’, which is seen here as one of several ‘building blocks’ for the more comprehensive concept of an ’empirical theory (ET)’.

Language Without Number Words


Here are some everyday examples that can illustrate some aspects of the concept of ’empirical verifiability’:

Case 1: There is an object with certain properties that the involved persons can perceive sensorily. Then one person, A, can say: ‘There is an object X with properties Y.’ And another person, B, can say: ‘Yes, I agree.’

Case 2: A specific object X with properties Y cannot be sensorily perceived by the involved persons. Then person A can say: ‘The object X with properties Y is not here.’ And another person, B, can say: ‘Yes, I agree.’

Case 3: There is an object with certain properties that the involved persons can sensorily perceive, which they have never seen before. Then person A can say: ‘There is an object with properties that I do not recognize. This is new to me.’ And another person, B, can then say: ‘Yes, I agree.’

The common basic structure of all three cases is that there are at least two people who ‘speak the same language’ and are in a ‘shared situation’ in everyday life. One person, let’s call him A, initiates a conversation with a ‘statement about an object with properties,’ where the statement varies depending on the situation. In all cases, the person addressed, let’s call him B, can ‘agree’ to A’s statements.

The three cases differ, for example, in how the object ‘appears’: In case 1, an object is ‘simply there,’ one can ‘perceive’ it, and the object appears as ‘familiar.’ In case 2, the object is known, but not present. In case 3, there is also an object, it can be perceived, but it is ‘not known.’

For the constructive success of determining an agreement that finds approval among several people, the following elements are assumed based on the three cases:

The participants possess:

  • ‘Sensory perception’, which makes events in the environment recognizable to the perceiver.
  • ‘Memory’, which can store what is perceived.
  • ‘Decision-making ability’ to decide whether (i) the perceived has been perceived before, (ii) the perceived is something ‘new,’ or (iii) an object ‘is no longer there,’ which ‘was there before.’
  • A sufficiently similar ‘meaning relationship’, which enables people to activate an active relationship between the elements of spoken language and the elements of both perception and memory, whereby language elements can refer to contents and vice versa.

Only if all these four components [2] are present in each person involved in the situation can one convey something linguistically about their perception of the world in a way that the other can agree or disagree. If one of the mentioned components (perception, memory, decision-making ability, meaning relationship) is missing, the procedure of determining an agreement using a linguistic expression is not possible.

[1] There are many different cases!

[2] These four concepts (perception, memory, decision-making ability, meaning relationship) are ‘incomprehensible on their own.’ They must be explained in a suitable context later on. They are used here in the current concept of ‘verifiable statements’ in a functional context, which characterizes the concept of ‘verifiable statement’.
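The agreement procedure behind the three cases can be sketched in code. The following Python sketch is purely illustrative; the `Agent` class, its attributes, and the `agree` function are hypothetical names of mine, not part of the text. Each participant needs perception, memory, a meaning relationship, and a decision-making ability, and agreement arises only when the independent decisions coincide.

```python
# Illustrative sketch (all names are hypothetical): two agents can agree
# on a statement only if each one can map the word, via its own meaning
# relationship, onto a perception or a memory and reach the same decision.

class Agent:
    def __init__(self, name, perceived, remembered, meaning):
        self.name = name
        self.perceived = set(perceived)    # objects currently sensed
        self.remembered = set(remembered)  # objects stored in memory
        self.meaning = meaning             # meaning relationship: word -> object

    def classify(self, word):
        """Decision-making ability: 'present' (case 1), 'absent' (case 2,
        known but not here), or 'new' (case 3, no learned meaning)."""
        obj = self.meaning.get(word)
        if obj in self.perceived:
            return "present"
        if obj in self.remembered:
            return "absent"
        return "new"

def agree(a, b, word):
    """A and B agree if their independent decisions coincide."""
    return a.classify(word) == b.classify(word)

meaning = {"cup": "cup#1", "clock": "clock#1"}
A = Agent("A", perceived={"cup#1"}, remembered={"cup#1", "clock#1"}, meaning=meaning)
B = Agent("B", perceived={"cup#1"}, remembered={"cup#1", "clock#1"}, meaning=meaning)

print(agree(A, B, "cup"))    # both perceive the cup: agreement on case 1
print(agree(A, B, "clock"))  # both remember it but do not see it: case 2
```

If any of the four components is removed (say, the `meaning` dictionary is empty), `classify` can only answer "new" and the procedure of determining agreement about the shared situation collapses, just as the text states.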

Language with Numerals


Typically, everyday languages today include numerals (e.g., one, two, 33, 4400, …, 1/2, 1/4), although they vary in scope.

Such numerals usually refer to some ‘objects’ (e.g., three eggs, 5 roses, 33 potatoes, 4400 inhabitants, … 1/2 pound of flour, 44 liters of rainfall in an hour, …) located in a specific area.

A comprehensible verification then depends on the following factors:

  • Can the specified number or quantity be directly determined in this area (a clear number must come out)?
  • If the number or amount is too large to estimate directly in the area, is there a comprehensible procedure by which this is possible?
  • What is the time required to make the determination in the area (e.g., minutes, hours, days, weeks, …)? If the necessary time keeps increasing, it becomes increasingly difficult to make the statement for a specific time (e.g., the number of residents in a city).

These examples show that the question of verification quickly encompasses more and more aspects that must be met for the verifiability of a statement to be understood and accepted by all involved.
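The three factors above can be condensed into a toy verification procedure. This Python sketch is a hypothetical illustration (the function name and the simple time model are assumptions of mine, not from the text): a numeric claim counts as verifiable only if a counting procedure exists and fits into the available time.

```python
# Hypothetical sketch: verifying a numeric statement about a bounded area.
# The linear time model (seconds per counted object) is an illustrative
# assumption, standing in for "a comprehensible procedure" from the text.

def verify_count(claimed, objects, seconds_per_object, time_budget):
    """Return (verifiable, result). The claim can only be checked if
    counting every object fits into the available time; otherwise the
    statement is not verifiable for the intended point in time."""
    needed = len(objects) * seconds_per_object
    if needed > time_budget:
        return (False, None)   # a procedure exists, but it takes too long
    return (True, len(objects) == claimed)

eggs = ["egg"] * 3
print(verify_count(3, eggs, seconds_per_object=1, time_budget=60))
# -> (True, True): three eggs can be counted within a minute

inhabitants = ["person"] * 4400
print(verify_count(4400, inhabitants, seconds_per_object=5, time_budget=60))
# -> (False, None): counting 4400 inhabitants does not fit the time budget
```

The point of the sketch is only that 'verifiability' is not a property of the statement alone but of the statement together with a procedure and its cost, which all participants must be able to accept.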

Language with Abstractions


Another pervasive feature of everyday languages is the phenomenon that, in the context of perception and memory (storing and recalling), abstract structures automatically form, which are also reflected in the language. Here are some simple examples:

IMAGE: Four types of objects, each seen as concrete examples of an abstract type (class).


In everyday life, we have a word for the perceived objects of types 1-4, even though their concrete variety makes each object look different: In the case of objects of group 1, we can speak of a ‘clock,’ for group 2 of a ‘cup,’ for group 3 of ‘pens,’ and in case 4 of ‘computer mice,’ or simply ‘mice,’ where everyone knows from the context that ‘mouse’ here does not mean a biological mouse but a technical device related to computers. Although we ‘sensorily’ see something ‘different’ each time, we use the ‘same word.’ The ‘one word’ then stands for potentially ‘many concrete objects,’ with the peculiarity that we ‘implicitly know’ which concrete object is to be linked with which word. If we were not able to name many different concrete objects with ‘one word,’ we would not only be unable to invent as many different words as we would need, but coordination among ourselves would completely break down: how could two different people agree on what they ‘perceive in the same way’ if every detail of perception counted? The same object can look very different depending on the angle and lighting.

The secret of this assignment of one word to many sensorily different objects lies not in the assignment of words to elements of knowledge; rather, it lies one level deeper, where the events of perception are transformed into events of memory. Simplifying, one can say that the multitude of sensory events (visual, auditory, gustatory (taste), tactile, …), after their conversion into chemical-physical states of nerve cells, become parts of neuronal signal flows, which undergo multiple ‘processings’. As a result, the ‘diversity of signals’ is condensed into ‘abstract structures’ that function as a kind of ‘prototype’ connected to many concrete ‘variants.’ There are thus something like ‘core properties’ that are ‘common’ to different perception events like ‘cup,’ and then many ‘secondary properties’ that can occur but, unlike the core properties, need not always be present. In the case of the ‘clock,’ for example, the two hands along with the circular arrangement of marks could be such ‘core properties.’ Everything else can vary greatly. Moreover, the ‘patterns of core and secondary properties’ are not formed once and for all, but as part of processes with diverse aspects (e.g., possible changes, possible simultaneous events), which can function as ‘contexts’ (e.g., the difference between ‘technical’ and ‘biological’ in the case of the term ‘mouse’).

Thus, the use of a word like ‘clock’ or ‘cup’ involves, as previously discussed, references to memory contents, to perceptual contents, and to learned meaning relationships, as well as the ability to ‘decide’ which of the concrete perception patterns belongs to which learned ‘prototype.’ Depending on how this decision turns out, we then say ‘clock’ or ‘cup’ or something else accordingly. This ability of our brain to ‘abstract,’ by automatically generating prototypical ‘patterns’ that can stand for many sensorily different individual objects, is fundamental for our thinking and speaking in everyday life. Only because of this ability to abstract can our language work.

It is no less impressive that this basic ‘ability to abstract’ of our brain is not limited to the relationship between the two levels ‘sensory perception’ and ‘storage in memory,’ but works everywhere in memory between any levels. Thus, we have no problem grouping various individual clocks based on properties into ‘wristwatches’ and ‘wall clocks.’ We know that cups can be seen as part of ‘drinking vessels’ or as part of ‘kitchenware.’ Pens are classified as ‘writing instruments,’ and ‘computer mice’ are part of ‘computer accessories,’ etc.

Often, such abstraction achievements are also referred to as ‘categorizations’ or ‘class formation,’ and the objects that are assigned to such class designations then form the ‘class content,’ where the ‘scope’ of a class is ‘fluid.’ New objects can constantly appear that the brain assigns to one class or another.

Given this diversity of ‘abstractions,’ it is not surprising that the assignment of individual objects to one of these classes is ‘fluid,’ ‘fuzzy.’ With the hundreds or more different shapes of chairs or tables that now exist, it is sometimes difficult to decide whether something is still a ‘chair’ or a ‘table’ in the ‘original sense’ [2] or rather a ‘design product’ in search of a new form.
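The fuzzy assignment of objects to classes described here can be illustrated with a minimal nearest-prototype sketch in Python. The prototypes, feature sets, and threshold are hypothetical examples of mine, not claims about how the brain actually works: an object joins the class whose ‘core properties’ it matches best, and falls outside all classes when no match is good enough.

```python
# Illustrative sketch of prototype-based ('fuzzy') classification:
# an object is assigned to the class whose core properties it matches best.
# Prototypes, features, and the threshold are hypothetical examples.

PROTOTYPES = {
    "clock": {"hands", "circular dial", "marks"},
    "cup":   {"container", "handle", "holds liquid"},
}

def classify(features, threshold=0.5):
    """Return the best-matching class, or None if the overlap with every
    prototype's core properties stays below the threshold (a 'design
    product' that no longer clearly fits any class)."""
    best, best_score = None, 0.0
    for label, core in PROTOTYPES.items():
        score = len(features & core) / len(core)   # fraction of core matched
        if score > best_score:
            best, best_score = label, score
    return best if best_score >= threshold else None

wall_clock = {"hands", "circular dial", "marks", "pendulum"}
odd_object = {"pendulum", "glass"}
print(classify(wall_clock))  # 'clock': all core properties are present
print(classify(odd_object))  # None: it fits no prototype well enough
```

Note how secondary properties (like "pendulum") do not disturb the assignment; only the fraction of matched core properties decides, which is one simple way to model the fluid class boundaries the text describes.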

For the guiding question of the verifiability of linguistic expressions that contain abstractions (and almost all do), it follows from the preceding considerations that the ‘meaning of a word,’ or indeed the ‘meaning of a linguistic expression,’ can never be determined by the words alone, but almost always only by the ‘context’ in which the linguistic expression occurs. Just as the examples with the ‘numerical words’ suggest, one must know, in a request like “Can you pass me my cup,” which of the various cups is the ‘speaker’s cup.’ This presupposes the situation and ‘knowledge of the past of this situation’: which of the possible objects had he used as his cup?[3]

The same holds when people try to describe a street, a neighborhood, a single house, and the like with language. Based on the general structures of meaning, each reader can form a ‘reasonably clear picture’ ‘in his head’ while reading, but almost all details that were not explicitly described (an explicit description of everything is normally almost impossible) are then also missing from the reconstructed ‘picture in the head’ of the reader. Based on the ‘experience knowledge’ of the language participants, of course, everyone can additionally ‘color in’ his ‘picture in the head.’[4]

If a group of people wants to be sure that a description is ‘sufficiently clear,’ additional information must be provided for all important elements of the report that are ‘ambiguous.’ One can, for example, jointly inspect and investigate the described objects and/or create additional special descriptions, possibly supplemented by pictures, sound recordings, or other hints.

When it comes to details, everyday language alone is not enough. Additional special measures are required.[5]

[1] A problem that machine image recognition has struggled with from the beginning and continues to struggle with to this day.

[2] The ‘original’ sense, i.e., the principle underlying the abstraction performance, is to be found in those neuronal mechanisms responsible for this prototype formation. The ‘inner logic’ of these neuronal processes has not yet been fully researched, but their ‘effect’ can be observed and analyzed. Psychology has been trying to approximate this behavior with many model formations since the 1960s, with considerable success.

[3] Algorithms of generative artificial intelligence (like chatGPT), which have no real context and no ‘body-based knowledge,’ attempt to solve the problem by analyzing extremely large amounts of words: they break documents down into their word components along with the possible contexts of each word, so that they can deduce possible ‘formal contexts,’ which then function as ‘quasi-meaning contexts.’ To a certain extent this meanwhile works quite well, but only in a closed word space (closed world).

[4] A well-known example from everyday life is the difference that can arise when someone reads a novel, forms ideas in their head, and eventually someone produces a movie about the novel: to what extent do the ideas one has formed of the individual characters correspond with those in the movie?

[5] Some may still know texts from so-called ‘holy scriptures’ of a religion (e.g., the ‘Bible’). The fundamental problem of the ‘ambiguity’ of language is of course intensified in the case of historical texts. With the passage of time, the knowledge of the everyday world in which a text was created is lost. Then, with older texts, there is often a language problem: the original texts, such as those of the Bible, were written in an old Hebrew (‘Old Testament’) or an old Greek (‘New Testament’), whose language use is often no longer known. In addition, these texts were written in different text forms, in the case of the Old Testament also at different times, whereby the text has also been repeatedly revised (which is often also connected with the fact that it is not clear who exactly the authors were). Under these conditions, deducing an ‘exact’ meaning is more or less restricted or impossible. This may explain why interpretations in the approximately 2000 years of ‘Bible interpretation’ have been very different at all times.

Talking about the world

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 5, 2024, Last change: June 7, 2024)

Starting Point

A ‘text’ shall be written that speaks about the world, including all living beings, with ‘humans’ as the authors in the first instance. So far, we know of no cases where animals or plants write texts themselves: their view of life. We only know of humans who write from ‘their human perspective’ about life, animals, and plants. Much can be criticized about this approach. Upon further reflection, one might even realize that ‘humans writing about other humans and themselves’ is not so trivial either. Even humans writing ‘about themselves’ is prone to errors, can go completely ‘awry,’ can be entirely ‘wrong,’ which raises the question of what is ‘true’ or ‘false.’ Therefore, we should spend some thought on how we humans can talk about the world and ourselves in a way that gives us a chance not just to ‘fantasize,’ but to grasp something that is ‘real,’ something that describes what truly characterizes us as humans, as living beings, as inhabitants of this planet… but then the question pops up again: what is ‘real’? Are we caught in a cycle of questions and answers, where the answers themselves turn out to be questions upon closer inspection?

First Steps

Life on Planet Earth

At the start of writing, we assume that there is a ‘Planet Earth’ and that on this planet there is something we call ‘life,’ and we humans, belonging to the species Homo sapiens, are part of it.

Language

We also assume that we humans have the ability to communicate with each other using sounds. These sounds, which we use for communication, we call here ‘speech sounds’ to indicate that the totality of sounds for communication forms a ‘system’ which we ultimately call ‘language.’

Meaning

Since we humans on this planet can use completely different sounds for the ‘same objects’ in the same situation, it suggests that the ‘meaning’ of speech sounds is not firmly tied to the speech sounds themselves, but somehow has to do with what happens ‘in our minds.’ Unfortunately, we cannot look ‘into our minds.’ It seems a lot happens there, but this happening in the mind is ‘invisible.’ Nevertheless, in ‘everyday life,’ we experience that we can ‘agree’ with others whether it is currently ‘raining’ or if it smells ‘bad’ or if there is a trash bin on the sidewalk blocking the way, etc. So somehow, the ‘happenings in the mind’ seem to have certain ‘agreements’ among different people, so that not only I see something specific, but the other person does too, and we can even use the same speech sounds for it. And since a program like chatGPT can translate my German speech sounds, e.g., into English speech sounds, I can see that another person who does not speak German, instead of my word ‘Mülltonne,’ uses the word ‘trash bin’ and then nods in agreement: ‘Yes, there is a trash bin.’ Would that be a case for a ‘true statement’?

Changes and Memories

Since we experience daily how everyday life constantly ‘changes,’ we know that something that just found agreement may no longer find it the next moment because the trash bin is no longer there. We can only notice these changes because we have something called ‘memory’: we can remember that just now at a specific place there was a trash bin, but now it’s not. Or is this memory just an illusion? Can I trust my memory? If now everyone else says there was no trash bin, but I remember there was, what does that mean?

Concrete Body

Yes, and then my body: time and again I need to drink something, eat something, I’m not arbitrarily fast, I need some space, … my body is something very concrete, with all sorts of ‘sensations,’ ‘needs,’ a specific ‘shape,’ … and it changes over time: it grows, it ages, it can become sick, … is it like a ‘machine’?

Galaxies of Cells

Today we know that our human body resembles less a ‘machine’ and more a ‘galaxy of cells.’ Our body has about 37 trillion body cells (one trillion = 10¹²), with another 100 trillion cells in the gut that are vital for our digestive system, and together these cells form the ‘body system.’ The truly incomprehensible thing is that these approximately 140 trillion cells are each completely autonomous living beings, with everything needed for life. And if you know how difficult it is for us as humans to maintain cooperation among just five people over a long period, then you can at least begin to appreciate what it means that 140 trillion beings manage to communicate and coordinate actions every second, over many years, even decades, so that the masterpiece ‘human body’ exists and functions.

Origin as a Question

And since there is no ‘commander’ who constantly tells all the cells what to do, this ‘miracle of the human system’ expands further into the dimension of where the concept comes from that enables this ‘super-galaxy of cells’ to be as they are. How does this work? How did it arise?

Looking Behind Phenomena

In the further course, it will be important to gradually penetrate the ‘surface of everyday phenomena’ starting from everyday life, to make visible those structures that are ‘behind the phenomena,’ those structures that hold everything together and at the same time constantly move, change everything.

Fundamental Dimension of Time

All this implies the phenomenon ‘time’ as a basic category of all reality. Without time, there is also no ‘truth’…

[1] Specialists in brain research will of course raise their hand right away, and will want to say that they can indeed ‘look into the head’ by now, but let’s wait and see what this ‘looking into the head’ entails.

[2] If we assume 200 billion stars for our home galaxy, the Milky Way (current estimates range from 100 to 400 billion), then our body system would correspond to about 700 galaxies the size of the Milky Way, one cell for one star.
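The footnote’s comparison can be checked with a quick back-of-the-envelope calculation; the figures (140 trillion cells, 200 billion stars) are the ones given in the text above:

```python
# Rough check of the cells-to-stars comparison in the footnote above.
total_cells = 140e12       # ~140 trillion cells (body + gut), per the text
milky_way_stars = 200e9    # assumed 200 billion stars in the Milky Way

galaxies = total_cells / milky_way_stars
print(round(galaxies))     # 700
```

So, at one cell per star, the body system would indeed fill about 700 Milky-Way-sized galaxies.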

[3] Various disciplines of the natural sciences, above all evolutionary biology, have partially illuminated many aspects of this mega-wonder over roughly the last 150 years. One can marvel at the physical view of our universe, but compared to the super-galaxies of life on Planet Earth, the physical universe seems downright ‘boring’… Don’t worry: ultimately, both are interconnected; one explains the other…

Telling Stories

Fragments of Everyday Life‚ÄĒWithout Context

We constantly talk about something: the food, the weather, the traffic, shopping prices, daily news, politics, the boss, colleagues, sports events, music, … mostly, these are ‘fragments’ of the larger whole that we call ‘everyday life’. People in one of the many crisis regions on this planet, especially those in natural disasters or even in war, live concretely in a completely different world, a world of survival and death.

These fragments in the midst of life are concrete and concern us, but they do not tell a story by themselves about where they come from (bombs, rain, heat, …), why they occur, how they are connected with other fragments. The rain that pours down is a single event at a specific place at a specific time. The bridge that must be closed because it is too old does not reveal by itself why this particular bridge, why now, why this could not be ‘foreseen’. The people who are ‘too many’ in a country, or also ‘too few’: Why is that? Could this have been foreseen? What can we do? What should we do?

The stream of individual events hits us, more or less powerfully, perhaps even simply as ‘noise’: we are so accustomed to it that we no longer even perceive certain events. But these events as such do not tell a ‘story about themselves’; they just happen, seemingly irresistibly; some say ‘It’s fate’.

Need for Meaning

It is notable that we humans still try to give the whole a ‘meaning’, to seek an ‘explanation’ for why things are the way they are. And everyday life shows that we have a lot of ‘imagination’ concerning possible ‘connections’ or ‘causes’. Looking back into the past, we often smile at the various attempts at explanation by our ancestors: as long as nothing was known about the details of our bodies and about life in general, any story was possible. In our time, with science established for about 150 years, there are still many millions of people (possibly billions?) who know nothing about science and are willing to believe almost any story just because another person tells it convincingly.

Liberation from the Moment through Words

Because of this ability, with the ‘power of imagination’ to pack things one experiences into a ‘story’ that suggests ‘possible connections’, through which events gain a ‘conceptual sense’, a person can try to ‘liberate’ themselves from the apparent ‘absoluteness of the moment’ in a certain way: an event that can be placed into a ‘context’ loses its ‘absoluteness’. Just by this kind of narrative, the experiencing person gains a bit of ‘power’: in narrating a connection, the narrator can make the experience ‘a matter’ over which they can ‘dispose’ as they see fit. This ‘power through the word’ can alleviate the ‘fear’ that an event can trigger. This has permeated the history of humanity from the beginning, as far as archaeological evidence allows.

Perhaps it is not wrong to first identify humans not as ‘hunters and gatherers’ or as ‘farmers’ but as ‘those who tell stories’.

[1] Such a magic word in Greek philosophy was the concept of ‘breath’ (Greek “pneuma”). The breath not only characterized the individually living but was also generalized to a life principle of everything that connected both body, soul, and spirit as well as permeated the entire universe. In the light of today’s knowledge, this ‘explanation’ could no longer be told, but about 2300 years ago, this belief was a certain ‘intellectual standard’ among all intellectuals, the prevailing ‘worldview’; it was ‘believed’. Anyone who thought differently was outside this ‘language game’.

Organization of an Order

Thinking Creates Relationships

As soon as one can ‘name’ individual events, things, processes, properties of things, and more through ‘language’, it is evident that humans have the ability to not only ‘name’ using language but to embed the ‘named’ through ‘arrangement of words in linguistic expression’ into ‘conceived relationships’, thereby connecting the individually named items not in isolation but in thought with others. This fundamental human ability to ‘think relationships in one’s mind’, which cannot be ‘seen’ but can indeed be ‘thought’ [1], is of course not limited to single events or a single relationship. Ultimately, we humans can make ‘everything’ a subject, and we can ‘think’ any ‘possible relationship’ in our minds; there are no fundamental restrictions here.

Stories as a Natural Force

Not only history is full of examples, but also our present day. Today, despite the incredible successes of modern science, almost universally, the wildest stories with ‘purely thought relationships’ are being told and immediately believed through all channels worldwide, which should give us pause. Our fundamental characteristic, that we can tell stories to break the absoluteness of the moment, obviously has the character of a ‘natural force’, deeply rooted within us, that we cannot ‘eradicate’; we might be able to ‘tame’ it, perhaps ‘cultivate’ it, but we cannot stop it. It is an ‘elemental characteristic’ of our thinking, that is: of our brain in the body.

Thought and Verified

The experience that we, the storytellers, can name events and arrange them into relationships, ultimately without limit, may indeed lead to chaos if the narrated network of relationships is ‘purely thought’, without any real reference to the ‘real world around us’; but it is also our greatest asset. With it, humans can not only fundamentally free themselves from the apparent absoluteness of the present; by telling stories, ‘initially just thought relationships’, we can also create starting points which we can then concretely ‘verify’ in our everyday lives.

A System of Order

When someone randomly sees another person who looks very different from what they are used to, all sorts of ‘assumptions’ automatically form in each person about what kind of person this might be. If one stops at these assumptions, these wild guesses can ‘populate the head’, and the ‘world in the head’ gets populated with ‘potentially evil people’; eventually, they might simply become ‘evil’. However, if one makes contact with the other, they might find that the person is actually nice, interesting, funny, or the like. The ‘assumptions in the head’ then transform into ‘concrete experiences’ that differ from what was initially thought. ‘Assumptions’ combined with ‘verification’ can thus lead to the formation of ‘reality-near ideas of relationships’. This gives a person the chance to transform their ‘spontaneous network of thought relationships’, which can be wrong, and usually are, into a ‘verified network of relationships’. Since the thought relationships as a network ultimately provide us with a ‘system of order’ in which everyday things are embedded, it appears desirable to work with as many ‘verified thought relationships’ as possible.

[1] The breath of the person opposite me, which for the Greeks connected my counterpart with the life force of the universe, which in turn is also connected with the spirit and the soul…

Hypotheses and Science

Challenge: Methodically Organized Guessing

The ability to think of possible relationships, and to articulate them through language, is innate [1], but the ‘use’ of this ability in everyday life, for example to match thought relationships with the reality of everyday life, this ‘matching’/‘verifying’ is not innate. We can do it, but we don’t have to. Therefore, it is interesting to realize that since the first appearance of Homo sapiens on this planet [2], 99.95% of the elapsed time passed before the establishment of organized modern science about 150 years ago. This can be seen as an indication that the transition from ‘free guessing’ to ‘methodically organized, systematic guessing’ must have been anything but easy. And if even today a large part of people, despite schooling and even higher education [3], tend to lean towards ‘free guessing’ and struggle with organized verification, then there seems to be a considerable threshold that a person must overcome, and must continually overcome, to transition from ‘free’ to ‘methodically organized’ guessing.[4]
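The 99.95% figure follows directly from the two time spans named in the text (about 300,000 years since the first appearance of Homo sapiens, about 150 years of organized modern science); a minimal check:

```python
# Share of Homo sapiens history that passed before organized modern science.
homo_sapiens_years = 300_000    # first appearance of Homo sapiens, per the text
organized_science_years = 150   # organized modern science, per the text

share_before_science = (homo_sapiens_years - organized_science_years) / homo_sapiens_years
print(f"{share_before_science:.2%}")  # 99.95%
```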

Starting Point for Science

The transition from everyday thinking to ‘scientific thinking’ is fluid. The generation of ‘thought relationships’ in conjunction with language, made possible by our capacity for creativity/imagination, is ultimately also the starting point of science. While in everyday thinking we tend to ‘verify’ ‘spontaneously thought relationships’ spontaneously and pragmatically, ‘science’ attempts to organize such verifications ‘systematically’, and then to accept such ‘positively verified guesses’ as ‘empirically verified guesses’ that count as ‘conditionally true’ until proven otherwise. Instead of ‘guesses’, science likes to speak of ‘hypotheses’ or ‘working hypotheses’, but they remain ‘guesses’ born of the power of our thinking and the power of our imagination.[5]

[1] This means that the genetic information underlying the development of our bodies is designed so that our body, with its brain, is constructed during the growth phase in such a way that we have precisely this ability to ‘think relationships’. It is again interesting to ask how it is possible that from a single cell about 37 trillion body cells (the approximately 100 trillion bacteria in the gut come ‘from outside’) can develop in such a way that they create the ‘impression of a human’ that we know.

[2] According to current knowledge, about 300,000 years ago in East Africa and North Africa, from where Homo sapiens then explored and conquered the entire world (there were still remnants of other human forms that had been there longer).

[3] I am not aware of representative empirical studies on how many people in a population tend to do this.

[4] Considering that we humans as the life form Homo sapiens appeared on this planet only after about 3.8 billion years of life, the 300,000 years of Homo sapiens make up roughly 0.008% of the total time since there has been life on planet Earth. Thus, not only are we as Homo sapiens a very late ‘product’ of the life process; the ability to ‘systematically verify hypotheses’ also appears ‘very late’ within the lifetime of Homo sapiens. Viewed across the entire span of life, this ability seems extremely valuable, which is indeed true considering the incredible insights we as Homo sapiens have been able to gain with this form of thinking. The question is how we deal with this knowledge. The behavior of using systematically verified knowledge is likewise not innate.
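The 0.008% figure in this footnote can likewise be verified from the two time spans it names (300,000 years of Homo sapiens, about 3.8 billion years of life on Earth):

```python
# Share of the history of life on Earth occupied by Homo sapiens.
years_of_life = 3.8e9           # ~3.8 billion years of life on Earth, per the text
homo_sapiens_years = 300_000    # ~300,000 years of Homo sapiens, per the text

share = homo_sapiens_years / years_of_life
print(f"{share:.3%}")  # 0.008%
```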

[5] The ability of ‘imagination’ is not the opposite of ‘knowledge’; it is something completely different. ‘Imagination’ is a trait that ‘shows’ itself the moment we start to think, perhaps even in the fact ‘that’ we think at all. Since we can in principle think about ‘everything’ that is ‘accessible’ to our thinking, imagination is a factor that helps to ‘select’ what we think. In this respect, imagination is prior to thinking.

Empirical True?

CONTEXT

This text is part of the text ‘Rebooting humanity’

(The German version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

Start: May 30, 2024

Last change: May 31, 2024

Empirical True?

Hypotheses 2–4 …


With Hypothesis 1, a further paradox arises: If the structure of our human body (including its brain) is designed such that there is no direct, permanent one-to-one mapping of the real physical world outside the brain into the internal states of the body (including the brain), how can humans then make and use ‘empirically true statements’ about something outside the body or outside the brain?

In everyday life, we can all have the following experiences:

When at least two people are involved and they have no special limitations, we can distinguish the following cases:

  1. There is an object with certain properties that the involved persons can perceive with their senses. Then one person A can say: ‘There is an object X with properties Y.’ And another person B can say: ‘Yes, I agree.’
  2. A certain object X with properties Y cannot be perceived by the involved persons. Then one person A can say: ‘The object X with properties Y is not there.’ And another person B can say: ‘Yes, I agree.’
  3. There is an object with certain properties that the involved persons can perceive with their senses, but which they have never seen before. Then one person A can say: ‘There is an object with properties which I do not know yet. This is new to me.’ And another person B can say: ‘Yes, I agree.’
  4. A certain object X with properties Y cannot currently be perceived by the involved persons, but it was there before. Then one person A can say: ‘The object X with properties Y is no longer there.’ And another person B can say: ‘Yes, I agree.’
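The four cases above can be summarized in a small decision sketch. The function and its three boolean distinctions are hypothetical, introduced only to make the case boundaries explicit; they are not part of the original text:

```python
def classify_case(perceived_now: bool, known_from_memory: bool, perceived_before: bool) -> int:
    """Map three toy distinctions onto the four everyday cases listed above."""
    if perceived_now and known_from_memory:
        return 1   # Case 1: object X with properties Y is there
    if perceived_now and not known_from_memory:
        return 3   # Case 3: a new, unknown object is there
    if perceived_before:
        return 4   # Case 4: object X was there, but is no longer
    return 2       # Case 2: object X is not there

print(classify_case(True, True, False))   # 1
print(classify_case(False, True, True))   # 4
```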

Introduction of Hypothesis 2
Case 1 becomes understandable if we assume that the sensory stimuli from object X with properties Y lead to activations in the sense organs, generating a sensory perception that can persist for the duration of object X’s presence.

To identify and classify this temporary perception as an ‘object of type X with properties Y,’ the involved persons must have a ‘memory’ that holds an ‘abstract object of type X with properties Y’ ready.

The ‘realized agreement’ between the perception of object X and the memory of a corresponding abstract object X then allows for the decision that there is a current perception of the abstract object X, whose ‘perceived properties’ ‘sufficiently match’ the ‘abstract properties.’

Important: this agreement occurring in the brain between a perceived object and a remembered object X does not imply anything about the real concrete circumstances that led to the perception of the object.[1]

This situation describes what is meant by Hypothesis 2: Persons can recognize a perceived object as an object of type X with properties Y if they have a memory available at the moment of the current perception.

Important: This Hypothesis 2 refers, so far, to what happens with and within an individual person. Another person normally cannot know about these processes. Internal processes in persons are, so far, not perceivable by others.[2]
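A minimal toy model of Hypothesis 2 might match a set of perceived properties against a remembered abstract object and count a ‘sufficient match’ when enough abstract properties recur in perception. The property names and the 0.75 threshold are invented for illustration only:

```python
def sufficiently_matches(perceived: set, abstract: set, threshold: float = 0.75) -> bool:
    """'Sufficient match': the share of remembered abstract properties
    that recur in the current perception reaches a threshold."""
    if not abstract:
        return False
    return len(perceived & abstract) / len(abstract) >= threshold

# Hypothetical abstract object 'trash bin' held in memory:
trash_bin = {"container", "lid", "grey", "on_sidewalk"}

print(sufficiently_matches({"container", "lid", "grey", "dirty"}, trash_bin))  # True  (3/4)
print(sufficiently_matches({"container", "red"}, trash_bin))                   # False (1/4)
```

Note that, exactly as footnote [1] observes, the function only compares internal representations; nothing in it guarantees that a real object caused the perceived set.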

[1] Modern simulation techniques can be so ‘real’ for most people that it becomes difficult, if possible at all, to discern the ‘difference’ from the real world based solely on sensory perception. This would be a case where a sensory perception and a remembered abstract object in the brain show a substantial agreement, although there is no ‘real’ empirical object triggering the perception. … The computer itself, which ‘simulates’ something in a manner that looks ‘real’ to an observer (or the technical interface through which the computer’s signal reaches human sensors), is nevertheless a ‘real machine’ addressing the human sense organ ‘from the outside’.

[2] Even if modern neuroscientific measuring techniques can make electrical and chemical properties and activities visible, it is, so far, never possible to directly infer the functionalities hidden therein from these activities. Analogously, if one measures the electrical activities of the chips in a computer (which is possible and is done), one can never infer the algorithms currently being executed, even if one knows these algorithms!

Introduction of Hypothesis 3
Case 1 also includes the aspect that person A ‘verbally communicates’ something to person B. Without this verbal communication, B would know nothing about what is happening in A. In everyday life, a person usually perceives more than just one object, possibly many objects simultaneously. Therefore, knowing that a person is referring to a specific object and not one of the many other objects is not self-evident.

In Case 1, it should be stated: A person A says, “There is an object X with properties Y.” And another person B says, “Yes, I agree.”

When a person ‘says’ something that all participants recognize as ‘elements of language L,’ these elements of language L are ‘sounds,’ i.e., sound waves that are generated on one hand by a speaking organ (with a mouth) and received on the other side by an ‘ear.’ Let’s simply call the generating organ ‘actor’ and the receiving organ ‘sensor.’ Then, in verbal communication, a person produces sounds with an actor, and the participant of the communication receives these sounds through his sensor.

It is, of course, clear that the spoken and then also heard sounds of a language L have no direct relation to the internal processes of perception, remembering, and the ‘agreement process’ between perception and memory. However, it can be assumed that there must be ‘internal neural processes’ in the speaker and listener that correspond to the generated sounds, otherwise the actor could not act.[1] In the case of the sensor, it was already pointed out earlier how stimuli from the outside world lead to activations of neurons, creating a flow of neural signals.

As it was generally assumed that there are neural signal flows and different abstract structures of objects that can be ‘internally’ stored and further processed, something similar must be assumed for the neural encoding of spoken and heard sounds. If one can distinguish elements and certain combinations of elements in the spoken acoustic sound material of a language, it is plausible to assume that these externally identifiable structures are also found in the internal neural realization.

The core idea of Hypothesis 3 can then be formulated as follows: There is a neural counterpart to the acoustically perceivable structure of a language L, which moreover is the ‘active’ part in producing spoken language and in ‘translating’ spoken sounds into the corresponding neural representations.

[1] The human speech organ is a highly complex system in which many systems work together, all of which must be neuronally controlled and coordinated.

Introduction of Hypothesis 4
With Hypothesis 2 (memory, comparison of perception and memory) and Hypothesis 3 (independent sound system of a language), the next Hypothesis 4 arises, suggesting that there must be some ‘relationship’ (mathematically: a mapping) between the sound system of a language L and the memorable objects along with current perception. This mapping allows ‘sounds’ to be connected with ‘objects (including properties)’ and vice versa.

In Case 1, person A has the perception of an object X with properties Y, along with a memory that ‘sufficiently matches,’ and person A says: “There is an object X with properties Y.” Another person B says, “Yes, I agree.”

Given the diversity of the world, constant changes, and the variety of possible sound systems [1], as well as the fact that humans undergo massive growth processes from embryo to continually developing person, it is unlikely that possible relationships between language sounds and perceived and remembered objects are ‘innate.’

This implies that this relationship (mapping) between language sounds and perceived and memorable objects must develop ‘over time,’ often referred to as ‘learning.’ Without certain presets, learning can be very slow; with available presets, it can be much faster. In the case of language learning, a person typically grows up in the presence of other people who generally already practice a language, which can serve as a reference system for growing individuals.

Language learning is certainly a lengthy process that includes not only individual acquisition but also inter-individual coordination among all those who practice a specific language L together.

As a result, learning a language means that not only is the ‘structure of the sound system’ learned, but also the association of elements of the sound system with elements of the perception-memory structure.[2]

In Case 1, therefore, person A must know which sound structure in the application group for language L is used for an object X with properties Y, and so must person B. If A and B have the same ‘relationship knowledge’ of sounds to objects and vice versa, person B can ‘understand’ the verbal expression of A “There is an object X with properties Y” because he also has a perception of this object and remembers an object X that sufficiently matches the perceived object, and he would name this fact in the same way A did. Then person B can say, “Yes, I agree.”
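The mapping of Hypothesis 4 can be sketched as a small learned, bidirectional lexicon. The class, its method names, and the example entries are hypothetical, meant only to illustrate that two persons ‘understand’ each other when their individually learned mappings agree:

```python
class Lexicon:
    """Learned mapping between sound patterns of a language L and object types."""

    def __init__(self):
        self.sound_to_object = {}   # hearing: sound -> remembered object type
        self.object_to_sound = {}   # speaking: object type -> sound

    def learn(self, sound: str, object_type: str) -> None:
        """Acquire one association; per the text, this is learned, not innate."""
        self.sound_to_object[sound] = object_type
        self.object_to_sound[object_type] = sound

# Persons A and B each acquire the mapping individually, in a shared language community:
a, b = Lexicon(), Lexicon()
for person in (a, b):
    person.learn("trash bin", "OBJECT_X")

# B 'understands' A's utterance if both map the sound to the same object type:
print(a.sound_to_object["trash bin"] == b.sound_to_object["trash bin"])  # True
```

Different languages would simply fill the same structure with different sounds for the same object types, as footnote [1] notes.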

[1] Consider the many thousands of languages that still exist on planet Earth, where different languages can be used in the same living environment. The same ‘perception objects’ can thus be named differently depending on the language.

[2] The study of these matters has a long history with very, very many publications, but there is not yet a universally accepted unified theory.

– !! Not finished yet !! –

Reality Embedded in Virtuality

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: May 29, 2024, Last change: June 5, 2024)

It addresses the paradox that, although we constantly feel like we are navigating in a real world, the ‘contents of our brain’ are not the ‘real world’ itself. Instead, these contents are the product of numerous neural transformation processes that convert ‘stimuli from the real world’ into ‘internal states,’ which we then treat as if they were the real world. This ‘as if’ is not a matter of free choice, as this situation results from the way our brain functions within our body. Through our body and brain, we are initially ‘locked-in’ systems.

This can also be illustrated by the fact that our body, as we all assume, finds itself in an everyday world consisting of many other bodies and objects with which our body ‘interacts’: we can move around in the everyday world and thereby change our ‘position’ in this world. We can touch, grasp, move, and alter objects. But these everyday objects can also act upon us: we perceive ‘smell,’ we ‘hear’ sounds, we ‘see’ shapes, colors, brightness, and much more.

This ‘perception’ of our everyday world through various ‘sense organs’ is by no means simple upon closer inspection: when visual stimuli hit our ‘eyes’ or acoustic stimuli hit our ‘ears,’ these physical stimuli from the everyday world are converted/transformed in the ‘sense organ’ into chemical state changes of nerve cells. These, in turn, are transformed into electrical potentials that can then propagate as ‘signals’ through further nerve cells. A ‘signal flow’ is created. The impressive thing about these signal flows is that they all have the same chemical-physical properties, regardless of whether they were triggered by visual or acoustic stimuli (or by other sense organs).

Whatever we perceive through our sense organs in conjunction with nerve cells, what then happens in our brain is not directly ‘the world out there’ as it is physically and chemically constituted, but the world as it has been transformed by our sense organs and the connected nerve cells into ‘neuronal signal flows’ that are further processed in the tissue of billions of nerve cells.

From the perspective of us humans, who have this body with its brain, these signal flows generate a ‘reality’ within us that we take at ‘face value,’ even though, compared to the external reality, it is only ‘virtual,’ a ‘virtuality.’ In this sense, one can say that the ‘reality of the external world’ appears in us as ‘virtuality,’ which is stimulated/induced in the domain of signal flows by the sense organs from the ‘reality of the external world.’

The Invasion of the Storytellers

Author: Gerd Doeben-Henisch

Changelog: April 30, 2024 – May 3, 2024

May 3, 2024: I added two epilogs

Email: info@uffmm.org

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software @chatGPT4 with manual modifications.

CONTEXT

Originally I wrote, that “this text is not a direct continuation of another text, but that there exist before various articles from the author on similar topics. In this sense, the current text is a kind of ‘further development’ of these ideas”. But, indeed, at least the text “NARRATIVES RULE THE WORLD. CURSE & BLESSING. COMMENTS FROM @CHATGPT4” ( https://www.uffmm.org/2024/02/03/narratives-rule-the-world-curse-blessing-comments-from-chatgpt4/ ) is a text, which can be understood as a kind of precursor.

In everyday life … magical links …

Almost everyone knows someone (or even several people) who sends many emails or other messages that only contain links: links to various videos, of which the internet provides plenty nowadays, or images with a few keywords.

Since time is often short, one would like to know if it’s worth clicking on this video. But explanatory information is missing.

When asked about it, whether it would not be possible to include a few explanatory words, the sender almost always replies that they cannot formulate it as well as the video itself.

Interesting: Someone sends a link to a video without being able to express their opinion about it in their own words…

Follow-up questions…

When I click on a link and try to form an opinion, one of the first questions is naturally who published the video (or text). The same set of facts can be narrated quite differently, even in complete contradiction, depending on the observer’s perspective, as can be seen and verified in everyday life. And since what we can perceive with our senses is always only very fragmentary, attached to surfaces, and tied to some moment in time, it does not necessarily allow us to recognize the various relationships to other aspects. This vagueness offers plenty of room for interpretation with each observation. Without a thorough consideration of the context and the backstory, interpretation is simply not possible … unless someone already has a ‘finished opinion’ that ‘integrates’ the ‘involuntary fragment of observation’ without hesitation.

So questioning and researching is quite ‘normal’, but our ‘quick brain’ first seeks ‘automatic answers’, as it doesn’t require much thought, is faster, requires less energy, and despite everything, this ‘automatic interpretation’ still provides a ‘satisfying feeling’: Yes, one ‘knows exactly what is presented’. So why question?

Immunizing…

As a scientist, I am trained to clarify all framework conditions, including my own assumptions. Of course, this takes effort and time and is anything but error-free. Hence, multiple checks, inquiries with others about their perspectives, etc. are a common practice.

However, when something catches my attention and I ask the ‘wordless senders of links’ about it, especially when I point out a conflict with the reality I know, the reactions tend in the direction that I have misunderstood, or that the author did not mean it that way at all. If I then refer to other sources that are considered ‘strongly verified’, these are labeled ‘lying press’, or their authors are immediately exposed as ‘agents of a dark power’ (there is a whole range of such ‘dark powers’); and if I dare to inquire here as well where the information comes from, then I quickly become a naive, stupid person for not knowing all this.

So, any attempt to clarify the basics of statements, to trace them back to comprehensible facts, ends in some kind of conflict long before any clarification has been realized.

Truth, Farewell…

Now, even in philosophy, the topic of ‘truth’ has unfortunately become no more than a repository of competing proposals. And the modern sciences, though fundamentally empirical, increasingly entangle themselves in the multitude of their disciplines and methods, so that ‘integrative perspectives’ are rare and the ‘average citizen’ tends to have a problem of understanding. Not a good starting point to effectively prevent the spread of the ‘cognitive fairy tale virus’.

Democracy and the Internet as a Booster

The bizarre aspect of our current situation is that precisely the two most significant achievements of humanity, the societal form of ‘modern democracy’ (for about 250 years, within a history of about 300,000 years) and the technology of the ‘internet’ (browser-based since about 1993), which for the first time have made a maximum of freedom and diversity of expression possible, have now created the conditions for the cognitive fairy tale virus to spread so unrestrainedly.

Important: today’s cognitive fairy tale virus occurs in the context of ‘freedom’! In previous millennia, the cognitive fairy tale virus already existed, but it was under the control of the respective authoritarian rulers, who used it to steer the thoughts and feelings of their subjects in their favor. The ‘ambiguities’ of meanings have always allowed almost all interpretations; and if a previous fairy tale wasn’t enough, a new one was quickly invented. As long as control by reality is not really possible, anything can be told.

With the emergence of democracy, the authoritarian power structures disappeared, but the people who were allowed and supposed to vote were ultimately the same as before in authoritarian regimes. Who really has the time and desire to deal with the complicated questions of the real world, especially if it doesn’t directly affect oneself? That’s what our elected representatives are supposed to do…

In the (seemingly) quiet years since World War II, the division of tasks seemed to work well: here the citizens, delegating everything, and there the elected representatives, who do everything right. ‘Control’ of power was supposed to be guaranteed through the constitution, the judiciary, and a functioning public…

But what was not foreseen were such trifles as:

  1. The increase in population and the advancement of technologies induced ever more complex processes with equally complex interactions that could no longer be adequately managed with the usual methods from the past. Errors and conflicts were inevitable.
  2. Delegating to a few elected representatives with ‘normal abilities’ can only work if these few representatives operate within contexts that provide them with all the necessary competencies their office requires. This task seems to be increasingly poorly addressed.
  3. The important ‘functioning public’ has been increasingly fragmented by the tremendous possibilities of the internet: there is no longer ‘the’ public, but many publics. This is not inherently bad, but when the available channels attract the ‘quick and convenient brain’ like light attracts mosquitoes, then more and more heads fall into the realm of ‘cognitive viruses’ that, after only short ‘incubation periods’, take possession of a head and control it from there.

The effects of these three factors have been clearly observable for several years now: the unresolved problems of society, which are increasingly poorly addressed by the existing democratic-political system, lead individual people in everyday situations to interpret their dissatisfaction and fears more and more exclusively under the influence of the cognitive fairy tale virus, and to act accordingly. This gradually worsens the situation, as the constructive capacities for problem analysis and the collective strength for problem-solving diminish more and more.

No remedies available?

Looking back over the thousands of years of human history, it’s evident that ‘opinions’, ‘views of the world’, have always only harmonized with the real world in limited areas, where it was important to survive. But even in these small areas, for millennia, there were many beliefs that were later found to be ‘wrong’.

Very early on, we humans mastered the art of telling ourselves stories about how everything is connected. These were eagerly listened to, they were believed, and only much later could one sometimes recognize what was entirely or partially wrong about the earlier stories. But in their lifetimes, for those who grew up with these stories, these tales were ‘true’, made ‘sense’, people even went to their deaths for them.

Only at the very end of humanity’s previous development (the life form of Homo sapiens) – that is, with 300,000 years compressed into 24 hours, after about 23 hours and 59 minutes – did humans discover, with the empirical sciences, a method of obtaining ‘true knowledge’ that not only works for the moment but allows us to look millions, even billions of years ‘back in time’ and, for many factors, billions of years into the future. With this, science can delve into the deepest depths of matter and increasingly understand the complex interplay of all these wonderful factors.

And just at this moment of humanity’s first great triumphs on the planet Earth, the cognitive fairy tale virus breaks out unchecked and threatens even to completely extinguish modern sciences!

Which people on this planet can resist this cognitive fairy tale virus?

Here is a recent message from Uppsala University [1,2], reporting on an experiment by Swedish scientists with students, which shows that it was possible to measurably sharpen students’ awareness of ‘fake news’ (here: the cognitive fairy tale virus).

Yes, we know that young people can shape their awareness to be better equipped against the cognitive fairy tale virus through appropriate education. But what happens when official educational institutions are not able to provide the necessary education, because either the teachers cannot conduct such knowledge therapy, or the teachers could do it but the institutions do not allow it? The latter cases are known, even in so-called democracies!

Epilog 1

The following working hypotheses are emerging:

  1. The fairy tale virus – the unrestrained, uncontrolled inclination to tell stories – is genetically ingrained in humans.
  2. Neither intelligence nor so-called ‘academic education’ automatically protect against it.
  3. ‘Critical thinking’ and ‘empirical science’ are special qualities that people can only acquire with great personal commitment. Minimal conditions for these qualities must exist in a society; without them, acquiring them is not possible.
  4. Active democracies seem to be able to contain the fairy tale virus to about 15-20% of societal practice (although it is always present in people). As soon as the percentage of active storytellers perceptibly increases, it must be assumed that the concept of ‘democracy’ is increasingly weakening in societal practice – for various reasons.

Epilog 2

Anyone actively affected by the fairy tale virus has a view of the world, of themselves, and of others that has so little to do with the real world ‘out there’, beyond their own thinking, that real events no longer influence their own thinking. They live in their own ‘thought bubble’. Those who have learned to think ‘critically and scientifically’ have acquired, and apply, techniques that repeatedly subject the thinking within their own bubble to a ‘reality check’. This check is not limited to specific events or statements… and that is where it gets difficult.

References

[1] Here’s the website of Uppsala University, Sweden, where the researchers come from: https://www.uu.se/en/press/press-releases/2024/2024-04-24-computer-game-in-school-made-students-better-at-detecting-fake-news

[2] And here’s the full scientific article with open access: “Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques.” Carl-Anton Werner Axelsson, Thomas Nygren, Jon Roozenbeek & Sander van der Linden. Received 26 Sep 2023, Accepted 29 Mar 2024, Published online: 19 Apr 2024: https://doi.org/10.1080/15391523.2024.2338451

NARRATIVES RULE THE WORLD. Curse & blessing. Comments from @chatGPT4

Author: Gerd Doeben-Henisch

Time: Feb 3, 2024 – Feb 3, 2024, 10:08 a.m. CET

Email: gerd@doeben-henisch.de

TRANSLATION & COMMENTS: The following text is a translation from a German version into English. For the translation I am using the software deepL.com. For commenting on the whole text I am using @chatGPT4.

CONTEXT

This text belongs to the topic Philosophy (of Science).

If someone has already decided in his head that there is no problem, he won’t see a problem … and if you see a problem where there is none, you have little chance of being able to solve anything.
To put it simply: we can be the solution if we are not the problem ourselves

Written at the end of some letter…

PREFACE

The original – admittedly somewhat long – text entitled “MODERN PROPAGANDA – From a philosophical perspective. First reflections” started with the question of what actually constitutes the core of propaganda, and then ended up with the central insight that what is called ‘propaganda’ is only a special case of something much more general that dominates our thinking as humans: the world of narratives. This was followed by a relatively long philosophical analysis of the cognitive and emotional foundations of this phenomenon.

Since the central text on the role of narratives as part of the aforementioned larger text was a bit ‘invisible’ for many, here is a blog post that highlights this text again, and at the same time connects this text with an experiment in which the individual sections of the text are commented on by @chatGPT4.

Insights on @chatGPT4 as a commentator

Let’s start with the results of the accompanying experiment with the @chatGPT4 software. If you know how the @chatGPT4 program works, you would expect the program to more or less repeat the text entered by the user and then add a few associations relative to this text, whose scope and originality depend on the internal knowledge that the system has. As you can see from reading @chatGPT4’s comments, its commenting capabilities are clearly limited. Nevertheless, they underline the main ideas quite nicely. Of course, you could elicit even more information from the system by asking additional questions, but then you would again have to invest human knowledge to enable the machine to produce more, and more original, associations. Without additional help, the comments remain rather modest.

Central role of narratives

As the following text suggests, narratives are of central importance both for the individual and for the collectives in which an individual participates: the narratives in people’s heads determine how people see, experience and interpret the world, and also which actions they take, spontaneously or deliberately. In the case of a machine, we would say that narratives are the program that controls us humans. In principle, people do have the ability to question or even partially change the narratives they have internalized, but only very few are able to do so, as this not only requires certain skills but usually a great deal of training in dealing with one’s own knowledge. Intelligence is no special protection here; in fact, it seems that precisely so-called intelligent people can become the worst victims of their own narratives. This phenomenon reveals a peculiar powerlessness of knowledge before itself.

This topic calls for further analysis and more public discussion.

HERE COMES THE MAIN TEXT ABOUT NARRATIVES

Worldwide today, in the age of mass media, especially in the age of the Internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [1] when they act. A typical feature of acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that the narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being in the right’ can go as far as claiming the right to kill others because they are ‘acting wrongly’ according to the ‘narrative’. We should therefore speak here of a ‘narrative truth’: Within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that is ‘found to be good’ by the followers of the narrative ‘as such’, as ‘making sense’. Normally, the effect of a narrative, which is experienced as ‘meaningful’, is so great that the ‘truth content’ is no longer examined in detail.[2]

Popular narratives

In recent decades, we have experienced ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘making sense’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly effect anything: the ‘people up there’ do what they want after all. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’! Such ‘popular narratives’, which make ‘good feelings’ possible, are becoming increasingly powerful. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most are also not sufficiently ‘trained’ to be able to clarify the truth content of a narrative at all. It is typical for followers of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts/videos that they find ‘good’ because these texts/videos somehow seem to support the popular narrative, and tend not to check the authors and sources because they are such ‘decent people’, because they always say exactly the same thing as the ‘popular narrative’ dictates. [3]

For Power, narratives are sexy

If one also takes into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people, or would like to gain it, to ‘create’ precisely such narratives or to ‘instrumentalize’ existing ones for themselves, then one should not be surprised that many governments and many other power groups in this world are doing just that today: they do not try to coerce people ‘directly’; instead they ‘produce’ popular narratives, or ‘monitor’ already existing ones, in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately both terms miss the core of the problem. [4]

Bits of history

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text show – are highly complex and extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed to curb the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. In addition, a real revolution took place about 500 years ago: with the concept of a ‘verifiable narrative (empirical theory)’, humanity succeeded in finding a format that optimizes the ‘preservation of truth’ and minimizes the slide into untruth. This new concept of ‘verifiable truth’ has since enabled great insights that were beyond imagination before. [5]

The ‘aura of the scientific’ has now permeated almost all of human culture, almost! We have to realize that although scientific thinking has comprehensively shaped the world of practicality through modern technologies, the way of scientific thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so attractive because they spare the individual from having to ‘think for himself’. Real thinking is exhausting, unpopular, annoying and hinders the dream of a simple solution.[6]

The central problem of humanity

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem for humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there exist no ‘simple off-the-shelf’ solutions for them, that we really have to ‘wrestle’ together for possible insights; the ‘old’ is not ‘enough’ in principle to recognize and implement the ‘true new’! [7]

COMMENTS

[1] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

Comments from @chatGPT4

[2] @chatGPT4 : The text delves into the profound impact of narratives in the context of mass media and the Internet, highlighting how individuals and groups often align their actions and beliefs with specific narratives. It points out the subjective nature of these narratives, where followers perceive them as absolute truths, driving their decisions and actions. This belief in the “narrative truth” can foster a strong sense of justification, even to the extent of committing violence against those deemed to be in opposition to this truth. The text emphasizes the power of narratives to shape perceptions of the world, often bypassing critical scrutiny of their factual accuracy. This phenomenon underscores the psychological and social mechanisms through which narratives influence human behavior, suggesting a collective suspension of disbelief that can have significant real-world implications.

The assertion that individuals often do not question the “truth content” of a narrative they find meaningful speaks to the cognitive biases and shortcuts humans use in processing information. Narratives simplify complex realities into coherent stories, making them more digestible and emotionally resonant. This process can lead to echo chambers and polarization, as groups coalesce around shared narratives, reinforcing their beliefs and dismissing opposing viewpoints without thorough examination.

The concept of “narrative truth” also reflects the construction of social realities, where shared beliefs and stories shape collective experiences and norms. This can have both positive and negative outcomes, facilitating social cohesion and identity within groups but also fostering division and conflict between them.

Critically, the text suggests a need for awareness and reflection on the narratives we adopt and their implications for our perceptions and actions. It hints at the ethical responsibility of individuals and media creators to critically evaluate and challenge the narratives they encounter and propagate. In an era of information overload and sophisticated digital communication, the ability to discern the nuanced difference between narrative-driven beliefs and objective truths is more crucial than ever for fostering informed, compassionate, and inclusive societies.

[3] @chatGPT4 : The text provides a critical examination of the role and influence of modern narratives in contemporary society, juxtaposing them with traditional religious narratives while highlighting their similar effects on individuals’ perceptions and emotions. It touches on a growing sense of political helplessness among citizens, even within democratic societies, where there’s a prevailing sentiment that individual actions have little to no impact on the decisions made by those in power. This perceived powerlessness, combined with the complexity and threats perceived in today’s world, creates a fertile ground for simplistic narratives to thrive. These narratives, by virtue of offering clarity and emotional comfort, gain traction and become increasingly influential.

The comparison to religious narratives is particularly insightful. It suggests that just as religious stories provide a sense of meaning, belonging, and understanding without necessitating a rigorous examination of their factual accuracy, modern narratives fulfill a similar psychological need. They offer an immediate, albeit superficial, sense of understanding and control in a seemingly chaotic world. This bypass of critical evaluation – the reluctance or inability to question the ‘truth’ behind these narratives – points to a significant challenge in contemporary discourse: the erosion of critical thinking and analytical skills in evaluating information.

Furthermore, the text highlights a concerning trend in how people consume and share information. The reliance on ‘echo chambers’ where one’s beliefs are constantly reaffirmed by similar opinions, often without scrutiny of the source’s credibility, perpetuates a cycle of misinformation and intellectual complacency. This behavior is facilitated by digital platforms, where sharing links to supportive texts or videos substitutes for genuine understanding and critical discussion.

The underlying issue, as the text implies, is not just about the narratives themselves but about the broader societal and educational failings that leave individuals unequipped to critically assess information. This deficiency not only makes people susceptible to manipulation but also undermines the foundational principles of informed citizenship in a democracy.

In conclusion, the text calls for a renewed emphasis on critical thinking and media literacy as essential skills in navigating the modern information landscape. By fostering these skills, individuals can become more discerning consumers of information, better able to understand the complexities of the world around them, and more effectively engage in the democratic process. This approach not only challenges the dominance of simplistic narratives but also supports a healthier, more informed public discourse.

[4] @chatGPT4 : This passage delves into the strategic utilization of narratives by those in power or seeking power as a means to influence and control populations. It presents a critical view of how governments and various power groups craft or harness existing popular narratives to shape public opinion and behavior indirectly. This approach, as opposed to direct coercion, operates through the subtler channels of storytelling and narrative dissemination, aiming to capture the “hearts and minds” of people. The text suggests that this method is widespread and increasingly recognized under terms like ‘hybrid warfare’ and ‘modern propaganda,’ though it argues these terms fail to fully capture the essence of the issue.

The key insight here revolves around the power of narratives to mold societal perceptions and values. By creating or modifying narratives, those in power can guide public sentiment and opinion in ways that serve their interests, often without the overt appearance of manipulation. This strategy can be particularly effective in the digital age, where information spreads rapidly and often without critical scrutiny, allowing for narratives to be amplified and tailored to target audiences with unprecedented precision.

The critique of labeling these tactics as merely ‘hybrid warfare’ or ‘modern propaganda’ highlights a deeper concern: that the focus on the mechanics of these strategies may overlook the fundamental challenge they pose to democratic processes and individual autonomy. Narratives, in this context, become tools for shaping reality, not just reflecting it, with profound implications for how people understand their world and their place within it.

Furthermore, the text implicitly raises questions about the ethical implications of such narrative control. It challenges the reader to consider the boundaries between legitimate persuasion and manipulative interference in the democratic discourse. This distinction is crucial in evaluating the health of public dialogue and the integrity of decision-making processes in a society.

In summary, the passage underscores the sophisticated ways in which narratives are employed by those in power to influence public opinion and behavior indirectly. It calls attention to the importance of recognizing and critically evaluating these narratives to preserve the principles of democracy and individual agency. The ability to discern the motivations behind and the content of these narratives is essential for a healthy, informed, and resilient public sphere.

[5] @chatGPT4 : This passage addresses a fundamental aspect of human social organization: the reliance on narratives to coordinate collective action. It posits that narratives are an intrinsic part of human nature, offering the only viable means through which communities can organize and mobilize collective action. However, it also acknowledges a critical vulnerability of narratives – their propensity for falsity and distortion of reality. This inherent flaw raises concerns about the accuracy and integrity of the stories that guide collective human behavior and decision-making.

The text suggests that the development of legal systems represents a historical attempt to mitigate the misuse of narrative power, aiming to safeguard society against the dangers of misinformation and manipulation. Legal frameworks, with their emphasis on evidence and accountability, can be seen as societal mechanisms designed to preserve truth and curb the abuse of narratives. Despite the ongoing challenges and imperfections in these systems, they have contributed to a gradual improvement in societal governance.

Moreover, the passage highlights a transformative moment in human history approximately 500 years ago with the emergence of the scientific method, described here as the concept of a “verifiable narrative (empirical theory).” This revolution in thought introduced a new paradigm for establishing truth, one based on empirical evidence and verifiability rather than mere assertion or belief. The scientific method, by demanding that claims about the world be tested and confirmed through observation and experiment, has dramatically enhanced our capacity to discern truth from falsehood. This approach has unlocked unprecedented insights and advancements, fundamentally altering humanity’s understanding of the universe and our place within it.

The core issue identified in the text is not merely the existence of narratives but the challenge of ensuring they reflect a truthful and accurate understanding of the world. The scientific method represents a pivotal development in this quest, providing a robust framework for distinguishing between verifiable truths and unfounded narratives. However, the text implies that the struggle to maintain the integrity of our collective narratives is ongoing and requires constant vigilance. As we navigate an ever-more complex information landscape, the principles of verifiability and empirical evidence remain essential guides in our pursuit of knowledge and in the organization of our societies.

[6] @chatGPT4 : This passage critically examines the influence and reach of scientific thinking in contemporary culture, juxtaposing it with the resilience and resurgence of non-scientific narratives. Despite the profound impact of scientific discoveries and technologies on practical aspects of life, the passage suggests that the mode of thinking promoted by science – characterized by critical inquiry, empirical evidence, and skepticism – has not fully supplanted other forms of narrative understanding. Instead, there is a noted resurgence of “non-truth narratives” that challenge, marginalize, or even seek to suppress scientific perspectives in various domains.

The text highlights a concerning trend: the growing appeal of irrational narratives derived from religious beliefs or popular culture, which are gaining ground against scientific rationality. This preference for irrational narratives is attributed to their simplicity and the comfort they offer individuals by absolving them of the need for critical thinking. The passage argues that real thinking – analytical, reflective, and independent – is perceived as burdensome, unpopular, and inconvenient by many, as it complicates the desire for simple answers to complex problems.

The allure of irrational narratives, according to the passage, lies in their ability to provide straightforward solutions and a sense of certainty in an increasingly complex world. These narratives offer emotional satisfaction and a framework for understanding the world that bypasses the rigorous demands of scientific scrutiny. This dynamic illustrates a broader cultural conflict between the values of rationality and the comfort of simplicity and certainty offered by alternative narratives.

This critique underscores a critical challenge facing society: the need to balance the benefits of scientific thinking with the human desire for narratives that provide meaning and emotional comfort. While scientific thinking has undeniably transformed the world, its principles of skepticism and demand for evidence can be at odds with the human inclination towards narratives that offer immediate clarity and solace. The passage calls for a reflection on the importance of fostering a culture that values critical thinking and scientific literacy while acknowledging and addressing the psychological and emotional needs that drive people towards irrational narratives. This balance is crucial for navigating the complexities of the modern world and ensuring that decisions are informed by both empirical evidence and a compassionate understanding of human nature.

[7] @chatGPT4 : This passage articulates a critical observation about contemporary society’s struggle with global challenges: the difficulty lies not just in the need for technological advancements but fundamentally in the widespread inability of individuals to engage critically with information and narratives. It underscores the necessity of fostering a culture where people are encouraged and equipped to think deeply, critically, and collaboratively to address complex issues. The emphasis is on the collective intellectual effort required to navigate and solve “real problems,” which are characterized as novel, lacking straightforward solutions, and demanding innovative approaches beyond the application of existing knowledge or methods.

The text suggests that the crux of addressing global challenges effectively is not merely technological innovation but, more importantly, enhancing collective cognitive capabilities. It highlights an urgent need for individuals who can think “more and better” and who are prepared to engage in the collaborative problem-solving processes essential for addressing the unique and multifaceted problems facing the world today. These problems are not amenable to simple, pre-packaged solutions; they require a willingness to engage in sustained intellectual effort, open-minded exploration, and creative thinking.

This perspective challenges the prevailing emphasis on technological solutions as the primary means to overcome global challenges. Instead, it argues for a paradigm shift towards valuing and developing human intellectual and collaborative capacities as the foundational step towards sustainable progress. The passage points out that the real bottleneck in addressing contemporary issues is not the lack of technological resources but the shortage of people capable of employing critical thinking to innovate and implement effective solutions.

In essence, the passage calls for a renewed focus on education and cultural values that promote critical thinking, intellectual curiosity, and collaborative problem-solving. By cultivating these capabilities, society can better equip itself to recognize, understand, and address the novel and complex challenges that define our times. This approach not only recognizes the limitations of relying solely on technological advancements but also highlights the transformative potential of human intellect and cooperation in forging pathways to a better future.