ChatGPT4: How do you define the concept ’empirical theory’?

Last Change: July 16, 2024

Inquirer: Gerd Doeben-Henisch

Email: info@uffmm.org

CONTEXT

This post is part of the Dialogues with chatGPT4.

How do you define the concept ’empirical theory’?

SUMMARY

At the end of the dialogue the inquirer stated: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

chatGPT4: Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

Protocol of the Dialogue

QUESTION: How do you define the concept ’empirical theory’?

The concept of an “empirical theory” generally refers to a theory that is grounded in and supported by observations and data derived from real-world experiences, experiments, or measurements, rather than purely theoretical or abstract reasoning. Here are some key aspects that define an empirical theory:

1. **Data-Driven**: An empirical theory is based on data collected through observations, experiments, or other forms of empirical research. The theory must be testable and measurable against the natural world.

2. **Predictive Power**: It typically provides a framework that not only explains current observations but also predicts future occurrences or behaviors. This predictive aspect is crucial for the validation and utility of the theory.

3. **Falsifiability**: A core principle in empirical theories is that they must be falsifiable. This means there must be some conceivable observation or experiment that could refute the theory if it were not true, making it subject to potential disproof.

4. **Replicability**: The results that support an empirical theory should be replicable by independent studies. This is important to ensure that the theory is robust and not the result of experimental errors or biases.

5. **Revision and Refinement**: Empirical theories are not static; they evolve over time as new data become available and as old assumptions are tested. Theories may be refined or discarded depending on how well they continue to match observations.

Empirical theories are fundamental in sciences such as physics, chemistry, biology, and social sciences, where theories must be grounded in observable phenomena and subject to experimental validation.
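The interplay of testability, prediction, and falsifiability described above can be reduced to a minimal sketch. The free-fall law, the observed values, and the tolerance below are chosen purely for illustration:

```python
# Minimal sketch of an empirical theory as a testable predictive rule.
# The law, the observations, and the tolerance are purely illustrative.

def predict_fall_time(height_m, g=9.81):
    """Theory: free fall without air resistance, t = sqrt(2h / g)."""
    return (2 * height_m / g) ** 0.5

def is_falsified(prediction, observation, tolerance):
    """The theory fails if an observation deviates beyond measurement error."""
    return abs(prediction - observation) > tolerance

prediction = predict_fall_time(20.0)        # about 2.02 seconds
print(is_falsified(prediction, 2.0, 0.1))   # consistent observation: False
print(is_falsified(prediction, 3.5, 0.1))   # deviant observation: True
```

Replicability then amounts to independent parties re-running the same comparison on their own data.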

QUESTION: You mention as one key property of an empirical theory ‘Predictive Power’. What are the Formalisms within a theory to enable ‘predictive power’?

The concept of “predictive power” in an empirical theory is central to its utility and credibility. Predictive power allows a theory to forecast future events or phenomena based on existing data and principles. Several formalisms within a theory enable this predictive capability:

1. **Mathematical Models**: Many scientific theories rely on mathematical models that describe relationships between variables quantitatively. These models often take the form of equations that can be used to make predictions. For example, Newton’s laws of motion use differential equations to predict the future position and velocity of moving objects.

2. **Statistical Models**: These models use statistical methods to infer and predict outcomes based on data. Regression analysis, for example, can predict a dependent variable based on one or more independent variables. Machine learning algorithms also fall into this category, where models are trained on historical data to predict outcomes from new data.

3. **Hypotheses and Propositions**: Theories often include specific hypotheses that are testable statements derived from general principles of the theory. These hypotheses can be used to predict outcomes in specific situations, which can then be tested through experiments or observations.

4. **Simulations**: In some fields, particularly in complex systems like climate science or economics, simulations play a crucial role in prediction. These simulations are typically run on computational models that incorporate many variables and their interactions to forecast future states.

5. **Algorithmic Rules**: Especially in theories related to computer science or systems theory, algorithmic rules that specify sequences of actions or decisions based on input conditions can predict outcomes of processes or the behavior of systems.

6. **Causal Inference**: Identifying causal relationships within data allows theories to predict the effects of changing one variable on another. This is fundamental in fields like epidemiology and economics, where understanding cause and effect is crucial for making predictions.
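The first of these formalisms can be made concrete in a few lines: Newton's second law, integrated numerically, predicts future position and velocity. The sketch below uses a body falling from rest; the simple Euler scheme and the step size stand in, for illustration only, for the more accurate integrators real models use:

```python
# Predicting motion from Newton's law via simple Euler integration.
# A body falls from rest under gravity; values are illustrative.

def simulate_fall(t_end, dt=0.001, g=9.81):
    """Integrate dv/dt = g and dx/dt = v from rest, downward positive."""
    pos, vel, t = 0.0, 0.0, 0.0
    while t < t_end:
        vel += g * dt   # update velocity from acceleration
        pos += vel * dt  # update position from velocity
        t += dt
    return pos, vel

pos, vel = simulate_fall(2.0)
print(pos, vel)  # close to the analytic 0.5*g*t^2 = 19.62 m and g*t = 19.62 m/s
```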

These formalisms are not only instrumental in predicting future events but also in confirming the validity of the theories themselves. By successfully predicting outcomes, theories validate their underlying assumptions and enhance their credibility and acceptance in the scientific community.
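The statistical formalism mentioned under point 2 can likewise be reduced to a minimal sketch: an ordinary least-squares line fitted to invented data points and then used to predict a value for a new input:

```python
# Simple linear regression by ordinary least squares (pure Python).
# The data points are invented for illustration.

def fit_line(xs, ys):
    """Return slope and intercept minimizing the squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]
slope, intercept = fit_line(xs, ys)
print(slope * 5.0 + intercept)  # predicted y for a new x = 5
```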

QUESTION: You mention six different formalisms that enable predictions within an empirical theory. But these six formalisms are, as such, not formally compatible. Does this mean that we are practicing six different kinds of theories which are ‘incompatible’?

The presence of various formalisms within empirical theories, each suitable for different types of data, phenomena, or disciplines, does not necessarily imply that these theories are “incompatible” in the broader sense of scientific inquiry. Instead, it reflects the diverse nature of the world and the complexity of phenomena that science seeks to understand and predict. Here’s how these differences can be viewed:

1. **Complementarity**: Different formalisms often address different aspects of phenomena or work best under different conditions. For instance, statistical models may be best for systems where direct causal relationships are difficult to measure due to noise or complexity, while causal inference is preferred in situations where the relationship between variables is clear and measurable. These approaches are complementary rather than incompatible.

2. **Interdisciplinary Integration**: In many fields, insights from multiple formalisms are integrated to form a more comprehensive understanding. For example, in climate science, mathematical models of physical processes are combined with statistical models derived from historical data to make more accurate predictions about future climate conditions.

3. **Scalability and Applicability**: Some formalisms are more applicable at certain scales or in certain contexts. For instance, algorithmic rules might be most applicable in computer science and certain areas of engineering, while mathematical models are often used in physics and chemistry across various scales.

4. **Evolution of Theories**: Theories can evolve over time as new tools and formalisms are developed. This evolution can lead to the merging of previously distinct approaches or the refinement of existing theories to incorporate new findings.

5. **Methodological Pluralism**: The diversity of formalisms is a strength that allows researchers to tackle complex questions from multiple angles. Methodological pluralism acknowledges that no single approach can capture all aspects of complex realities, and different methods can provide different insights that are valid within their own scopes.

In conclusion, rather than viewing different formalisms as leading to incompatible theories, it’s more productive to see them as different tools in a toolkit, each useful for certain types of problems. The integration of these tools across disciplines and their ongoing development is part of what drives scientific progress.

QUESTION: That ‘different formalisms’ can be seen as ‘each useful for certain types of problems’ is OK. But this does not answer the basic question of how to ‘integrate different formalisms’. Do you know any kind of strategy which solves the problem on a general formal level?

Integrating different formalisms from diverse scientific disciplines is a complex challenge that often requires a multi-faceted approach. Here are some strategies that can help address this issue on a more formal level:

### 1. **Unified Modeling Frameworks**

One approach to integrating different formalisms is to use unified modeling frameworks that can accommodate various types of models. For example, systems theory and network theory provide frameworks that can model physical, biological, social, and information systems. These frameworks allow different types of data and relationships to be incorporated into a cohesive model. Examples include agent-based modeling and system dynamics, which can integrate both qualitative and quantitative data.
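As a deliberately tiny illustration of the agent-based style mentioned here: agents are simple state holders, and a local contact rule applied over many random encounters produces system-level behavior. The rule and all parameters are invented for this sketch:

```python
# Tiny agent-based sketch: information spreads through random contacts.
# Agents hold a boolean state; all parameters are purely illustrative.
import random

random.seed(42)
agents = [False] * 50   # True = agent has adopted the information
agents[0] = True        # one initial adopter

for step in range(30):
    a, b = random.sample(range(len(agents)), 2)  # random pairwise contact
    if agents[a] or agents[b]:                   # contact transmits the state
        agents[a] = agents[b] = True

print(sum(agents))  # number of adopters after 30 random contacts
```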

### 2. **Meta-Analysis and Systematic Reviews**

In areas like medicine and social sciences, meta-analysis and systematic reviews are used to synthesize results from multiple studies that may use different methodologies and formalisms. This approach provides a way to quantitatively combine data from different sources to arrive at more general conclusions.
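The quantitative core of a fixed-effect meta-analysis can be sketched in a few lines: each study's effect estimate is weighted by the inverse of its variance, so more precise studies dominate the pooled result. The effect sizes and standard errors below are invented:

```python
# Fixed-effect meta-analysis: pool study effects by inverse-variance weighting.
# Effect sizes and standard errors are invented for illustration.

def pooled_effect(effects, std_errors):
    """Return the inverse-variance weighted mean and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

effect, se = pooled_effect([0.30, 0.10, 0.25], [0.10, 0.20, 0.15])
print(effect, se)  # the pooled estimate is dominated by the most precise study
```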

### 3. **Interdisciplinary Collaboration**

Active collaboration across disciplines can facilitate the integration of different formalisms by leveraging the expertise of specialists in each area. This might involve joint research projects, interdisciplinary teams, and cross-disciplinary training programs that encourage a deeper understanding and adaptation of diverse methodologies.

### 4. **Mathematical and Statistical Bridging Methods**

Techniques such as Bayesian statistics, Monte Carlo methods, and other computational approaches can serve as bridges between different formalisms. These methods allow for the combination of data and predictions from different types of models, even when these models are based on different foundational assumptions.
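One concrete Bayesian bridging device is a precision-weighted combination of two models' predictions of the same quantity, assuming independent Gaussian uncertainties. All the numbers below are invented:

```python
# Combining two model predictions with Gaussian uncertainties via a
# precision-weighted Bayesian update. All numbers are illustrative.

def combine_gaussian(mean_a, var_a, mean_b, var_b):
    """Posterior of two independent Gaussian estimates of one quantity."""
    precision = 1.0 / var_a + 1.0 / var_b
    mean = (mean_a / var_a + mean_b / var_b) / precision
    return mean, 1.0 / precision

# A physical model predicts 21.0 (variance 4.0); a statistical model
# trained on data predicts 24.0 (variance 1.0).
mean, var = combine_gaussian(21.0, 4.0, 24.0, 1.0)
print(mean, var)  # the posterior lies nearer the more certain model
```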

### 5. **Semantic Integration**

This involves developing a common vocabulary and set of concepts that can be shared across disciplines. Ontologies and semantic web technologies are useful in this context, as they help define relationships between different concepts and facilitate data integration across diverse datasets.

### 6. **Hybrid Models**

Creating hybrid models that incorporate elements from different formalisms can be effective, especially in engineering and environmental science. For example, combining physical laws with empirical data within a computational model can yield predictions that are more accurate and robust than those obtained from a single formalism.
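A minimal sketch of such a hybrid: a physical law supplies the baseline prediction, and an empirical correction estimated from observations (invented here) adjusts it:

```python
# Hybrid model: physical baseline plus a data-driven correction term.
# The observations are invented; the correction is a mean residual.

def physical_model(height_m, g=9.81):
    """Baseline: ideal free-fall time, ignoring air resistance."""
    return (2 * height_m / g) ** 0.5

# Observed fall times (height in m, time in s) run systematically
# longer than the ideal law predicts, e.g. due to air resistance.
observations = [(10.0, 1.48), (20.0, 2.10), (40.0, 2.95)]
residuals = [t_obs - physical_model(h) for h, t_obs in observations]
correction = sum(residuals) / len(residuals)

def hybrid_model(height_m):
    """Physics baseline plus the empirically estimated correction."""
    return physical_model(height_m) + correction

print(hybrid_model(30.0))  # physics plus the learned correction
```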

### 7. **Philosophical and Methodological Reconciliation**

Addressing underlying philosophical and methodological differences is crucial for integrating formalisms. This might involve developing new methodologies that acknowledge and bridge epistemological differences, such as combining qualitative and quantitative research methods.

### Conclusion

While integrating different formalisms poses significant challenges, these strategies offer pathways to build more comprehensive and universally applicable theories. The process is inherently iterative and requires ongoing dialogue between different scientific domains. This integration is essential for tackling complex, multifaceted problems that no single discipline can solve alone.

QUESTION: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

The complexity of integrating formalisms arises from several factors:

1. **Epistemological Differences**: Different disciplines often operate under fundamentally different assumptions and theories about the nature of knowledge and reality. These differences can make it challenging to create a common framework that is accepted and useful across disciplines.

2. **Methodological Diversity**: Each field has developed its methodologies and tools that are optimized for its specific types of data and problems. Bridging these methodological gaps requires not just technical solutions but often a rethinking of foundational principles.

3. **Data Incompatibility**: Diverse types of data, from quantitative measurements to qualitative observations, can be difficult to integrate due to differences in scale, units, and interpretations.

4. **Communication Barriers**: Effective interdisciplinary collaboration, which is crucial for integrating formalisms, is often hindered by differences in terminology, priorities, and research cultures.

The strategies I mentioned (unified modeling frameworks, interdisciplinary collaboration, hybrid models, etc.) represent efforts to address these challenges, but they are more about mitigating the difficulties rather than completely solving them. Progress is being made, especially with the advent of advanced computational techniques and a growing emphasis on interdisciplinary research, but fully integrating diverse scientific formalisms into a coherent, universally applicable framework remains an aspirational goal rather than a current reality.

Thus, while significant strides can be and have been made in specific contexts or projects, the broader challenge persists and is a dynamic field of academic inquiry and practical application.

ANSWER of INQUIRER: Thanks for this dialogue 🙂

You’re welcome! I’m glad we could discuss this topic together. If you have any more questions or need further clarification on anything else, feel free to ask. Happy thinking! 😊

Self-destruction is an option

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 14, 2024, Last change: July 14, 2024)

Starting Point

We are still here on this planet. However, the fact that our ‘human system’ largely programs itself without our ‘voluntary participation’ can become our downfall within the framework of a democracy and in the age of the internet; not necessarily, but highly likely…

Self-programming

Unlike today's machines, all living beings, including humans, are designed to ‘self-program’: whatever we perceive of ourselves, of others, and of the environment is automatically carried into our interior, where it is also largely automatically structured, arranged, evaluated, and much more. No one can resist it. One can only steer this ‘self-programming’ by shaping one's environment in such a way that certain perceptions become less likely and others more likely. Educational processes presuppose this capacity for self-programming, and they also decide what happens during education.

Dictatorship of the ‘Is There’

What has arrived within us is there for the time being. It forms our primary reality. When we want to act, we first rely on what is there. What is there is somehow ‘true’ for us, shaping our further perception and understanding. Something ‘around us’ that is ‘different’ is indeed ‘different’ and ‘does not fit’ with our inner truth.

Which points of comparison?

Suppose the majority of what is ‘inside us’, what we initially assume to be ‘true’, were ‘false’, ‘inappropriate’, ‘inaccurate’, etc., with respect to the real world outside; we would then have little chance of recognizing our own ‘untruth’ as long as most people around us share the same ‘untruth’.[1] Recognizing ‘untruth’ presupposes that one somehow has ‘examples of truth’ suitable for being ‘compared’ with the prevailing untruth. However, the presence of examples of truth does not guarantee the recognition of untruth; it only increases the likelihood that it might happen.[2]

[1] Throughout the known history of humanity, we can observe how certain ‘untruths’ were able to dominate entire peoples, not only in autocratic systems.

[2] Systems with state-promoted untruth can be identified, among other things, by the suppression of diversity and of examples of truth, and by the allowance of only certain forms of opinion.

Modern Untruths

Unlike autocratic systems, democratic systems officially have ‘freedom of speech’, which allows for great diversity. In democratic systems of the Democracy 1.0 format, it is assumed that this guaranteed freedom is not abused.[1]

With the advent of modern media, especially media in conjunction with the internet, it has become possible to make money on a large scale with the distribution of media. The temptation is obvious: to spread media over the internet in whatever way earns the maximum amount of money. A popular method is ‘advertising’: the longer and more often a user stays in front of content, the more advertising revenue flows. The temptation is great to offer the user only what arouses his ‘automatic interest’. That this ‘automatic interest’ is a very strong motive, and that it correlates specifically with content that does not require much thought, is confirmed daily. It is now known and increasingly described that large parts of a population can be ‘specially programmed’ in this way.

In the ‘struggle of power-driven systems’, this possibility of externally programming people via the internet is massively exploited in so-called ‘hybrid warfare’. While autocratic systems are ‘massively sealed off’, modern democracies in the 1.0 format are almost an Eldorado for the application of hybrid-warfare methods. Like the money-driven media industry, hybrid warfare also uses ‘light content’, mixing fragments of the ‘true’ with fragments of the ‘false’, particularly those that easily excite, and within a short time the ‘flock of believers’ in these messages grows.[2] The ‘congregation of these propaganda believers’ usually cannot be influenced by ‘arguments’. These convictions are programmed in such a way that all sources which could represent critical alternatives are ‘outlawed’ from the start.[3]

And unfortunately it is true that democracies in the 1.0 format have so far appeared ‘weak’ and ‘helpless’ against this kind of use of freedom, although the recognition is slowly growing that there is something like abuse through the ‘false programming of people’.

[1] Preventing systematic abuse of freedom in Democracies 1.0 is difficult to impossible without changing freedom of speech itself.

[2] The broad coverage of this propaganda can easily be recognized when one talks to people in different places in Germany (and abroad!) who do not know each other but who, in the course of conversation, tell more or less the same stories in a tone of conviction. Many (most?) even have a higher education. This raises the question of why an academic education apparently does so little to promote a ‘critical spirit’.

[3] A popular term is ‘lying press’. Anything that could become ‘dangerous’ is ‘a lie’, although those who talk about lying press do not seriously engage with this press at all.

What is the probability of survival of truth?

Anyone who has ever delved deeply into the question of ‘truth’, and who knows that it takes various kinds of research, investigation, reflection, and even experiments to ‘look behind the apparent phenomena’, along with much communication with other people, sometimes in other languages, knows that truth is not automatic: truth does not just happen; truth cannot be had for ‘free’. The use of truth for beneficial technologies, new forms of agriculture, new transportation systems, etc., appears desirable in retrospect, at the end of a long journey, when it somehow becomes ‘obvious’ what all this is good for; at the beginning of the path, this is almost unrecognizable to everyone. The pitiable state of the education system in many countries is a telling testament to the low regard for education as a training process for truth.

Given the rapid spread of unscrupulous internet media businesses accompanied by a worldwide surge in hybrid warfare, the survival probability of truth seems to be decreasing. Democracies, as the actual ‘bastions of truth’, are experiencing themselves as a place of accelerated ‘evaporation of truth’.

Collective Knowledge: Generative AI in chatbot format as a helper

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 10, 2024, Last change: July 10, 2024)

Starting Point

As the texts of this book will gradually show, the term ‘collective knowledge’ is a crucial keyword for a characteristic that deeply defines humans, the life form of ‘Homo sapiens.’ For an individual, ‘collective knowledge’ is hardly directly perceivable, yet without this collective knowledge no single human would have any knowledge at all. Yes, this not only sounds like a paradox, it is a paradox. While a ‘contradiction’ between two statements represents a factual incompatibility, a ‘paradox’ also conveys the impression of a ‘contradiction,’ but in terms of the matter itself it is not an ‘incompatibility.’

The ‘knowledge of us individuals’ is real knowledge, but due to the finiteness of our bodies, our perception system, and our memory, we can in fact gather only a very small amount of knowledge ‘in us.’ However, the more people there are, and the more ‘knowledge’ each person ‘produces’ daily, analogously or digitally, the greater grows the amount of knowledge that we humans ‘deposit’ in our world. Newspapers, books, libraries, and databases can collect and sort this knowledge to a limited extent, but an individual can find and ‘process’ only small fractions of this ‘collected knowledge.’ The gap between the ‘available collected knowledge’ and the ‘individually processable knowledge’ is constantly growing.

In such a situation, the availability of generative artificial intelligence in the format of chatbots (GAI-ChaBo) is almost an ‘evolutionary event’! This new technology does not answer all questions, but it can in principle give the individual novel, direct access to what we should call the ‘collective knowledge of humanity.’

Before the digitalization of the world …

Before the digitalization of the world, it was laborious to convey thoughts and knowledge in a way that others could become aware of them: initially only through oral tradition and inscriptions in rock, then on clay and stone tablets, parchment and papyrus. With the availability of paper, writing became easier (though durability was a problem); this led to collections of texts, to books, and to the first libraries of books (libraries existed even for cuneiform clay tablets). Great libraries like the ‘Library of Alexandria’ became precious ‘collection points of knowledge,’ but during their existence they were also subjected to various destructive events, which could lead to great losses of recorded knowledge.

A ‘mechanized production of books’ has been around since the 8th century, and modern book printing began in the 15th century. The development of libraries, however, progressed slowly for a long time, often only on a private basis. It was not until the 19th century that there was a stronger development of the library system, now including public libraries.

Despite this development, it remained difficult for an individual to access knowledge through a library, and even where this (usually privileged) access existed, obtaining specific texts, inspecting them, and making notes, or later copies, was cumbersome and time-consuming. The access of the individual reader resembled small ‘samples’ that, even within the framework of scientific work, remained very limited over the years. The language problem should not be overlooked either: the holdings of a library in a country A were predominantly restricted to texts in the language of country A, with ‘foreign-language books’ forming only a small proportion.

‘Acquisition of knowledge’ was therefore laborious, time-consuming, and very fragmented for an individual.

An increasingly important alternative to this hard-to-access world of library knowledge were modern magazines and journals, in many languages, with ever shorter ‘knowledge cycles.’ However, the more such journals there are, the more painfully the natural limitations of the individual are felt in the face of the swelling journal knowledge. Currently (2024), it is hardly possible to estimate the exact number of scientific journals. In the field of computer science alone, approximately 2,000 journals are estimated, with an average of about 25,000 (or more) articles per year. And scientific journals in Chinese alone are stated to number over 10,000.[1]

[1] For more detailed information on the collection of Chinese journals, you can visit the East View page on China Academic Journals (CAJ) here.

With digitalization

Since the availability of the World Wide Web (WWW) in the 1990s, a unified information space has emerged that has continued to spread globally. Although we are currently witnessing an increasing ‘isolation’ of the WWW among countries, the development of a common information space is unstoppable.

Alongside this information space, technologies for ‘collecting,’ ‘storing,’ ‘retrieving,’ and ‘analyzing’ data have also evolved, making it increasingly possible to find ‘answers’ to ‘questions’ from ever more sources.

With the advent of so-called ‘Generative Artificial Intelligence in the format of Chatbots’ (GAI-ChaBo) since 2022, this ‘data utilization technology’ has reached a level that not only finds ‘raw data’ but also allows an individual user with their limited knowledge direct access to ‘collective human knowledge,’ provided it has been digitized.

For the ‘evolution of life on this planet,’ this availability of collective knowledge to the individual may be the most significant event since the appearance of Homo sapiens itself about 300,000 years ago. Why?

The next level?

The sustainability debate of the last roughly 50 years has contributed to a realization: alongside the rather individual perspective on life and strong regional or national interests and perspectives of success, perspectives have gradually entered consciousness (though not universally) which suggest, now substantiated by diverse data and models, that there are problems exceeding the event horizon of individual particular groups, and these can be entire nations. Many first think of ‘resources’ that are becoming scarce (e.g., fish stocks), polluted (the world's oceans), or dangerously reduced (forest systems, raw materials, life forms, etc.). What has hardly been discussed so far, although it should be the most important topic, is the factor that produces all these problems: Homo sapiens himself, who by his behavior, indeed by his sheer numbers, causes all the known ‘problems.’ And this does not happen ‘automatically,’ but because the behavior of Homo sapiens on this planet is ‘controlled’ by his ‘inner states’ in such a way that he seems ‘incapable’ of changing his behavior, because he does not have his ‘inner states’ under control.

These inner states, roughly considered, consist of needs, different emotions, and collected experiences linked with knowledge. Knowledge provides the ‘images’ of oneself, of others, and of the world as a Homo sapiens sees it. Needs and emotions can block, ‘bind,’ or change knowledge. The knowledge that is currently available, however, has great power: ultimately a Homo sapiens can only do what his current knowledge tells him, if he listens to his knowledge and not to ‘others’ who mean something to him in his life situation.

If one is now interested in a ‘possible future,’ or — even more specifically — in a ‘possible future that is as good as possible for as many people as possible,’ and sustainable, then the challenge arises as to how people in the situation of everyday life, a certain form of the present, can ‘mentally’ surpass this present in such a way that, despite the current present, they can ‘somehow’ think of a piece of ‘possible future.’

‘Generative Artificial Intelligence in the format of chatbots’ (GAI-ChaBo) can help make the (approximate) entirety of past knowledge accessible, albeit only selectively, in response to questions. But the ‘knowledge of the past’ provides ‘nothing new’ out of itself, and, above all, the past does not necessarily contain those ‘goals’ and ‘values’ that are needed in the current present in order to ‘want’ precisely that possible future that will matter.

With this challenge, Homo sapiens collides ‘with himself’ full force, and he will not be able to ‘hide behind GAI-ChaBo.’ A GAI-ChaBo always only delivers what people have previously said and done, albeit in a breadth that an individual could not achieve, but ultimately a GAI-ChaBo only functions like a kind of ‘mirror of the human collective.’ A GAI-ChaBo cannot replace humanity itself. A GAI-ChaBo is the product of collective human intelligence and can make the entirety of this collective intelligence visible in outline (an incredibly great achievement), but no more.

For the next level, Homo sapiens must somehow manage to ‘get a grip on himself’ in a completely different way than before. There are hardly any usable role models in history. What will Homo sapiens do? GAI-ChaBo is an extraordinary success, but it is not the last level. We can be curious…

Automation of Human Tasks: Typology with Examples of ‘Writing Text’, ‘Calculating’, and ‘Planning’

This text is part of the text “Rebooting Humanity”

The German version can be found HERE.

Author No. 1: Gerd Doeben-Henisch
Contact: cagent@cognitiveagent.org

(Start: June 5, 2024, Last updated: June 6, 2024)

Starting Point

In the broader spectrum of human activities, there are three ‘types’ of action patterns that are common in everyday life and crucial for shared communication and coordination: (i) writing texts, (ii) describing (calculating) quantitative relationships, and (iii) planning possible states in an assumed future. All three types have been present since the beginning of documented ‘cultural life’ of humans. The following attempts a rough typology along with known forms of implementation.

Types and their implementation formats

(a) MANUAL: We write texts ‘manually’ using writing tools and surfaces. Similarly, in quantitative matters, there is manual manipulation of objects that represent quantitative relationships. In planning, there is the problem of how to represent a ‘new’ state: if the ‘new’ is ‘already known’, one can revert to ‘images/symbols’ of the known; if it is ‘truly new’, it becomes difficult; there is no ‘automatism of the new’. How do you describe something that has never existed before? Added to this is the — often overlooked — problem that ‘objects of planning’ are usually ‘value and goal dependent’; needs, intentions, expectations, factual necessities can play a role. The latter can be ‘socially standardized’, but given the ‘radical openness of the future’, history has shown that ‘too strong standardizations’ can be a shortcut to failure.

(b) MACHINE SUPPORT: Skipping the phase of ‘partially mechanical’ support and moving on to the early phase of ‘computer support’, there are machines that can be ‘programmed’ using ‘programming languages’ so that both the writing instrument and the writing surface are represented by the programmed machine, which allows many additional functions (correcting texts, saving, multiple versions, automatic corrections, etc.). However, one still has to do the writing oneself: letter by letter, word by word, etc. In ‘calculating’, writing things down is still very laborious, but the ‘calculation’ itself then takes place partially ‘automatically’. Planning is similar to writing texts: the ‘writing down’ is supported (with all the additional functions), but ‘what’ one writes down is left to the user. Apart from ‘quantitative calculating’, a ‘projection’ or ‘prediction’ is generally not supported. An ‘evaluation’ is not supported either.

(c) LANGUAGE-BASED SUPPORT: The phase of ‘language-based support’ replaces manual input with speaking. For selected areas of text, this is becoming increasingly successful. For ‘quantitative matters’ (calculating, mathematics, etc.), it hardly works at all; for planning, too, it works only to a very limited extent, where already formulated texts are concerned.

(d) ARTIFICIAL INTELLIGENCE ENVIRONMENTS: The Artificial Intelligence (AI) environment is considered here in the context of dialogue formats: the user can ask questions or issue commands, and the system responds. The relevant AI formats here are the so-called ‘generative AIs’ in chatbot format. Given the ‘existing knowledge’ of humans in the format of ‘stored documents/images/…’ and given the ‘dialogue formats’ of humans (also through explicit training), these generative AIs can process questions and orders in ‘formal proximity’ to the known material within a dialogue, so that one does not have to intervene oneself. Corrections and changes in detail are possible. Both in ‘text creation’ and in ‘calculating’, this can function reasonably well within the realm of the ‘known’. However, actual accuracy in the ‘real world’ is never guaranteed. In ‘planning’, the specific problem remains that for the AI the ‘truly new’ is possible only to a limited extent, within the framework of combinatorics. The truth reservation remains, but it also applies in the ‘manual case’, where the human plans himself. The evaluation problem is likewise limited to already known evaluation patterns: the future is not the same as the past; the future is more or less ‘different’. Where should the necessary evaluations come from?

An interesting question remains: in what sense does the advancing support by generative AIs actually support communication and coordination among people, and specifically, whether and to what extent the central challenge of ‘future planning’ can be additionally supported by it. The fact is that we humans find future planning difficult to very difficult, in social life, in community life, and in larger contexts such as counties, states, etc. But for a sustainable future, successful planning seems indispensable.