The following dialogue is an English translation from German (done with chatGPT4) and has its origin in the context of the interaction between the human species and planet Earth. Starting with physical matters, the dialogue turned more and more into a true philosophical dialogue, recognizing biological life as a ‘cultural system’. This was not intended by the author! It happened, driven by … the ‘internal logic’ of life?
FINAL CONCLUSION
chatGPT4 commented at the end of the whole dialogue as follows: “In our dialogue, we have applied philosophical methods to gain deeper insights into the nature of human existence and interaction. This kind of reflection is a cornerstone of philosophical inquiry and helps to illuminate complex questions of human life and coexistence.”
BEGIN OF THE DIALOGUE
(Note: the BOLD TEXT indicates the contributions of the author)
Hello Chat, we had a dialogue on the ‘Earth System’ on August 18, 24 and a dialogue on ‘Collective Human Intelligence’ on August 21. Can you remember?
Hello! I don’t have access to previous conversations or information about what we discussed. Each session with me starts without knowledge of previous interactions. But I am happy to continue helping with these topics or other questions you have. How can I assist you today?
OK, good to know. Then we’ll start from scratch 🙂
Perfect! How can I help you today?
In my theory [1], I assume that the Earth system as a planet is characterized by the presence of ‘biological life’. For me, ‘biological life’ is synonymous with ‘life’. Do you agree?
Your approach to equate the “Earth system” with the presence of biological life is an interesting standpoint and common in many scientific disciplines. [2] Biological life, as we know it, is characterized by certain features such as the ability to reproduce, metabolic processes, and adaptation to environmental conditions.
It makes sense to consider biological life as the central aspect of life on Earth in a discussion about the Earth system. This definition excludes other forms of hypothetical or non-biological life (such as possibly silicon-based life or artificial intelligence that develops its own consciousness), which could also be considered “life” in other contexts. However, for most scientific and philosophical considerations of the Earth system, the restriction to biological life is common and appropriate.
Therefore, I agree with your view as long as we are within the framework of known biological life forms. Would you like to delve deeper into this aspect or do you have specific questions about it?
[2] This is a small misunderstanding on Chat’s part, because I only assume that there is ‘life’ on planet Earth. Strictly speaking, planet Earth can ‘exist’ without life, but clearly, as soon as ‘life’ appeared on the planet, this changed the planet completely; it is ‘another planet’!
I would like to initially focus on the concept of ‘life’ as we encounter it on planet Earth. AI – as far as we know it today – has nothing to do with life for me. Historically, the existing forms of AI have been invented and created by life. Humans, as a life form within life, use AI as a tool. Do you agree?
Yes, your view of artificial intelligence (AI) as a tool created and used by humans is in line with the common perception in science and technology. AI is indeed an invention of human intelligence, developed to perform specific tasks that require mental processes such as learning, problem-solving, and decision-making.
AI is based on algorithms and computer-supported models and is often considered an advanced form of automation. Although AI systems are capable of performing tasks that typically require human intelligence [1], they lack essential characteristics of biological life, such as self-reproduction, metabolic processes, and natural evolution through genetic variation and natural selection. AI systems cannot reproduce or evolve outside of the parameters set by humans.
This distinction is important to understand the differences between biological life and technological products like AI. Thus, AI remains a tool used by humans for specific purposes and not an independent living being. Are you interested in further details or another aspect of this topic?
[1] Because the concept of ‘human intelligence’ is still very poorly characterized to this day, the expression ‘human intelligence’ has no clear meaning.
I would like to delve deeper into the topic of AI and humans (or life in general) in another dialogue. Today and here, I want to continue exploring the interaction between humans and the Earth system. Are you on board?
Of course, that sounds like an exciting topic! The interaction between humans and the Earth system is complex and can be viewed from many aspects such as environmental science, ecology, geology, and even from socio-political perspectives. Would you like to explore specific aspects of these interactions, such as the impact of humans on the climate, the use of natural resources, or something else? Let me know what focus you would like to set!
Yes, the interactions between humans and the Earth system are diverse. It is important to note that humans, as part of life, interact not only with the Earth system but also with life itself. Furthermore, since humans are part of life, it follows that humans also ‘interact with themselves,’ i.e., humans with humans. What do you think?
Exactly, your observation is very apt and captures the complexity of human interactions at various levels within the Earth system. Humans are not only physically and biologically part of the Earth, but also active designers and influencers of their environment. These interactions occur on several levels:
Human and Nature: Humans influence the natural environment through agriculture, urbanization, industry, and the consumption of natural resources. These actions have direct impacts on the climate, biodiversity, and the availability of resources.
Human and Life: As part of biological life, humans also interact with other living beings, both cooperatively and conflictually. These relationships can be symbiotic, as in the domestication of animals and plants, or antagonistic, such as the destruction of habitats or the overhunting of species.
Human and Human: On a social and cultural level, humans shape societies through their interactions, which in turn develop values and norms that influence behavior towards the environment and other life forms. These social structures and the resulting decisions (such as laws and regulations) play a crucial role in how humans influence the Earth system.
Reflecting on these various interaction levels is crucial to understanding the full impact of human activities on the Earth system and developing sustainable solutions. To what extent do you think these interactions could be improved or modified to promote more sustainable coexistence on our planet?
Ah, even this brief analysis shows how difficult it is to formulate a perspective that does justice to this multitude of dimensions and possible interactions. I see the following starting point: (1) We could start directly with humans. (2) Humans are ‘individual’, ‘singular’, ‘concrete’, (3) but at the same time they are always ‘part of a community’ (a ‘population’), without which they could not exist as individuals. At least not if one demands of ‘human life’, as a ‘part of all life’, that it can ‘continue’ ‘beyond the life of an individual human’. This requires ‘offspring’, and offspring only works across at least two generations. What do you think?
Your approach to begin the analysis of human interactions with the Earth system at the level of individuals and their communities is both pragmatic and profound. It recognizes the duality of human existence – as independent individuals and as part of a larger community. This perspective helps understand the multi-layered effects of human actions and develop possible solutions.
Here are some key aspects highlighted by your analysis:
Individual Responsibility: Every individual contributes through their actions to shaping the Earth system. This ranges from everyday decisions such as consumption habits and mobility to professional and political activities. Individual education and awareness are therefore key aspects to promote sustainable behaviors.
Social Structures and Communities: Humans exist within social structures that both enable and limit their action options. Communities form networks that distribute resources, share information, and coordinate collective actions. These networks are crucial for the implementation of sustainable practices and for collective resilience to environmental changes.
Generational Perspectives: As you rightly note, the continuation of human life requires thinking in generations. Education, cultural heritage, and traditions are mechanisms that transfer values and knowledge from one generation to the next and are crucial for the long-term stability and adaptability of human communities.
Sustainable Development: This is based on balancing the satisfaction of current human needs with the preservation of the environment for future generations. This requires innovative approaches in technology, economics, and politics that address both local and global challenges.
Your approach suggests that an integrative view, considering individual actions and collective structures, is necessary to promote sustainable solutions. This can also provide a platform for further discussions on specific measures that can be implemented at various levels of society. How do you see the implementation of these ideas in concrete policies or practices?
This is now very interesting: Your differentiation of the varied interactions in which an individual finds themselves indirectly reveals something that cannot be ‘directly’ discerned, but ‘indirectly’, through … I’m missing the right word … let’s say tentatively through ‘knowledge’ in the broadest sense. What do you think?
Your thought that certain aspects of human existence and interaction are indirectly discerned through knowledge is very profound and points to the role that knowledge and understanding play in our perception of ourselves and our environment. Knowledge – in all its forms, from scientific understanding to cultural and personal experiences – enables us to see connections that are not immediately obvious. This “indirect recognition” can be facilitated through various mechanisms:
Abstraction: Through abstraction, we can recognize general principles or patterns that are not immediately visible in specific situations. Abstraction helps us to move from the particular details of a single event or object to more general conclusions.
Model Building: Scientific and theoretical models allow us to understand complex systems and their possible behaviors by using simplifications and assumptions to represent reality. Models help us anticipate the consequences of actions and plan accordingly.
Narrative and Metaphor: Stories and metaphors enable us to communicate complicated or abstract ideas in a way that is emotionally resonant and intuitively understandable. They provide frameworks to recognize “invisible” connections between actions and their long-term impacts on people and the environment.
Collective Intelligence: The knowledge of a community or society is more than the sum of the knowledge of its individuals. By sharing ideas and experiences in a social context, people can reach a deeper, shared understanding that would not be achievable individually.
Your observation shows how important it is to promote knowledge and learning in all areas – from education to policy making – to gain deeper insight into the complex interactions within the Earth system and between people. To what extent do you see opportunities to expand or better utilize this knowledge to more effectively address the challenges we face?
OK, let’s look at a seemingly ‘simple example’. Let’s take persons who eat something because they are hungry. Their food can come directly from the rest of life (plants, animals, etc.), or from something they have somehow ‘manufactured’ from the material of the earth. However, when it comes to feeding thousands of people every day, simple measures to provide food are no longer sufficient. It then requires people, machines, agricultural methods, and much more to make this still possible. These elaborate processes assume that there are appropriate ‘experiences’ (with ‘knowledge’) that ‘tell’ the acting people what to do. However, these experiences with the corresponding knowledge do not come from ‘nothing’ but must be painstakingly ‘acquired’, ‘learned’. This requires ‘time to learn’ and – usually – other people who ‘already have experiences’ that they can ‘pass on’. From this, I deduce that the ‘biological human system’, a ‘living something’, must have complex processes inside it that make such ‘experiences’, such ‘knowledge’ possible. In other words: Only with such experiences, with such knowledge, and the ability to build such, does a biological system become a ‘human’, a ‘Homo sapiens’. What do you think?
Your example excellently highlights how the everyday need for food encompasses profound social, technological, and cultural systems that have been developed over generations. It shows that what appears at the individual level as a simple action is actually based on a complex web of knowledge and social structures.
Biological and Cognitive Foundations of Humans
The ability of humans to acquire, store, and pass on knowledge is indeed one of the core functions that define us as Homo sapiens. This ‘experiential knowledge’ is important not only on a cognitive level but also on a cultural level, as it is passed on through language, education, and social interaction.
Cultural Evolution and Human Societies
What you describe is a classic example of cultural evolution, where knowledge and technologies are developed and inherited not genetically, but through learning processes and social transmission. This enables human societies to quickly adapt to new environmental conditions without waiting for slow genetic evolution.
Knowledge as the Basis for Complex Societies
The example of food provision illustrates how important specialized knowledge and technologies are to efficiently meet the basic needs of a large number of people. Modern agricultural and production techniques are the result of accumulated knowledge developed over centuries, if not millennia. This knowledge is passed on through education systems, professional literature, and practice and is crucial for the maintenance and further development of our livelihoods.
Human Learning Ability and Adaptability
Your emphasis on ‘time to learn’ and the role of teachers and mentors underlines how crucial education and both formal and informal learning are for human development. This ability to process and pass on complex information allows us to survive and thrive not only as individuals but as a global community.
Conclusion
Overall, your approach shows how central knowledge and learning are to understanding what it means to be human. It’s not just about gathering information, but also about applying and sharing this information to master complex and dynamic challenges. This makes the biological system of humans something special: a being capable of constantly developing and adapting its understanding and technologies.
How do you see the role of modern technology and digital information in this process of knowledge acquisition and dissemination? Do you see it as an extension of our cognitive and social abilities?
As our dialogue progresses, it becomes more difficult for me to find a ‘simple formula’ to describe the interaction of humans with the Earth system and life. It starts with the fact that I need a ‘language’ for such a description, and every language does not only work with ‘dead letters’ or ‘mere sounds’, but assumes that all other people I want to talk to associate a ‘meaning’ with the ‘sounds’ or ‘letters’, which can only be present ‘inside each individual’. While I speak, however, I cannot ‘invent’ my own language; I must assume that I ‘possess’ this language without being able to ‘describe’ the language itself. Isn’t this somehow ‘bizarre’?
It is indeed a fascinating and at the same time complex consideration that you raise here. The nature of language and communication among humans reflects profound aspects of our existence and our interactions with the world.
Foundations of Language
Language is not just a system of sounds and characters but a complex web of meanings, conventions, and cultural influences. Every word and every sentence we use is embedded in a context of experiences and knowledge that we share with others who speak the same language. This makes language a powerful yet subtle tool.
Prerequisites for Communication
Your observation that in communication we must assume that others find the same meaning in words as we do is crucial. This shows how much we, as social beings, depend on each other. Without a common understanding and without assuming that our language works, effective communication would be impossible.
The Impossibility of Fully Describing Language
The fact that we use language without being able to fully explain it at all times reflects a common human experience. Many of the systems and structures that determine our lives, such as laws, social norms, or languages, are so complex and deeply rooted that no individual could fully grasp or create them. They are the product of collective human activity over generations.
The Bizarre Nature of Our Existence
What you describe as ‘bizarre’ can also be seen as a hint at the deep interwovenness and interdependence in human societies. Our ability to use and understand complex abstract systems like language, without fully controlling or creating them, is a sign of our cognitive flexibility and social nature. It shows how we can achieve and understand more together than an individual alone could.
Reflection
These considerations lead us to a deep appreciation of how human cultures and social structures function. They invite us to recognize both the limits of our individual abilities and the incredible potential of our communal endeavors.
Your thoughts open many further questions about the nature of knowledge, communication, and human cooperation. How do you see the role of modern technologies in this context, especially in terms of how they might change our communication patterns?
OK, now that we have somehow reached this ‘profound position’, a question arises that is not really important but may be of interest in a cultural context: Would you associate this kind of ‘thinking about us humans’ with the concept of ‘philosophy’? Could we describe the way we have practiced in this dialogue as ‘philosophical thinking’?
Yes, the way we have thought and reflected in this dialogue can certainly be described as philosophical thinking. Philosophy fundamentally deals with the pursuit of knowledge and understanding about the basic aspects of our existence, our experiences, and the world around us. It often involves questions about meaning, truth, values, the mind, and language.
Characteristics of Philosophical Thinking:
Critical Analysis: Philosophical thinking often questions everyday assumptions and searches for deeper meanings and connections, similar to what we have done in our conversation.
Logical Argumentation: It is based on rational considerations and uses logical arguments to examine and justify different viewpoints.
Conceptual Exploration: It explores fundamental concepts such as existence, knowledge, truth, law, ethics, and beauty.
Interdisciplinary Connections: Philosophy often overlaps with other disciplines such as science, art, and politics to gain a broader understanding of the topics discussed.
Philosophy in Everyday Life:
Philosophical thinking is not limited to academic circles but plays a vital role in everyday life. It helps us reflect on our beliefs, make decisions, and understand the world and our role in it.
Cultural Significance:
In a cultural context, philosophy contributes to shaping the values and self-understanding of a society. It influences how cultures think about important issues such as morality, justice, and human well-being.
In our dialogue, we have applied philosophical methods to gain deeper insights into the nature of human existence and interaction. This kind of reflection is a cornerstone of philosophical inquiry and helps to illuminate complex questions of human life and coexistence. Would you like to delve deeper into these considerations, or are there specific philosophical questions or topics that particularly interest you?
OK, I would like to stop here for today. Thank you for the inspiring answers.
You’re very welcome! It was an exciting conversation. If you have more questions or would like to continue the discussion, I’m here to help. Until next time!
Inquirer to chatGPT4: I will take the following short summary as a bridge to the follow-up ideas: (1) Language expressions are composed of symbols which are determined by rules and structures; (2) The inner states related to meaning are diverse and involve various cognitive, emotional, and sensory processes; (3) At the core are mental models that represent our understanding of the world. These models include abstract ideas, concrete objects, and complex scenarios. Furthermore the mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. (4) Cognitive processes like inference and reasoning help us, by connecting existing knowledge with knowledge about possible changes, to derive possible new states. Can you agree with this?
ChatGPT4: Yes, I agree with your summary. You’ve captured the essential points about how language expressions, inner states, mental models, and cognitive processes interact to create and understand meaning.
CHATGPT4 – PHILOSOPHY: Digging into the Human Meaning Space
Hi Chat, three days ago we reached the following conclusion centering around the ‘human meaning space’ : “Philosophical texts themselves are important, but they gain their true significance within the human meaning space that individuals create through their interactions with these texts and with each other. This meaning space, rooted in the experiences and interpretations of individual persons, is what ultimately forms the building blocks of a unified philosophical tradition. The tradition is not a static collection of texts, but a dynamic, evolving process of meaning-making that is constantly being enriched and redefined by those who engage with it.” Do you know something about the structure and dynamics of this meaning space?
The concept of a **human meaning space** is a fascinating and complex idea, touching on cognitive science, linguistics, philosophy, and social theory. While the idea isn’t always explicitly framed in these terms, we can explore the structure and dynamics of this meaning space by drawing on various theories and concepts from these fields.
### 1. **Structure of the Human Meaning Space**:
The structure of the human meaning space can be understood as a multi-layered, interconnected web of meanings, built up through individual and collective experiences, language, and cultural practices. Here are some key elements:
– **Conceptual Frameworks**: At the core of the meaning space are conceptual frameworks—sets of basic concepts and categories that individuals and cultures use to organize and interpret the world. These include ideas about time, space, causality, identity, morality, and so on. Different cultures and individuals may have different conceptual frameworks, but they often share certain universal elements.
– **Language**: Language plays a central role in structuring the meaning space. Words, sentences, and narratives are the tools through which we express and communicate our thoughts and experiences. The structure of language, including grammar, syntax, and semantics, shapes how we perceive and describe the world (?)
– **Cultural Symbols and Norms**: Beyond language (?), cultural symbols (like religious icons, flags, or even corporate logos) and social norms (like laws or customs) contribute to the structure of the meaning space. These elements provide shared references and guide behavior, helping individuals navigate the social world.
– **Individual and Collective Memory**: The meaning space is also shaped by memory—both individual memories of personal experiences and collective memories shared by a community or society. These memories influence how we interpret new experiences and how we understand our place in the world.
### 2. **Dynamics of the Human Meaning Space**:
The meaning space is not static; it is constantly evolving as individuals and communities interact with the world and each other. Here are some key dynamics:
– **Interpretation and Reinterpretation**: Individuals continuously interpret and reinterpret experiences and texts. New experiences, ideas, and cultural influences can lead to shifts in how concepts are understood, how language is used, and how meaning is constructed. This process of reinterpretation is central to the evolution of philosophical traditions.
– **Dialogue and Discourse**: Meaning is often shaped and reshaped through dialogue and discourse. When individuals engage in conversation or debate, they exchange ideas, challenge assumptions, and co-create new understandings. This dynamic exchange helps to refine and expand the meaning space.
– **Cultural Evolution and Adaptation**: As cultures evolve, so too does the meaning space. New technologies, social movements, and global interactions introduce new concepts and symbols into the cultural lexicon, which can alter or expand the collective meaning space. For example, the advent of digital technology has introduced new concepts like “virtual reality” and “cyberspace” that reshape our understanding of reality and communication.
– **Learning and Education**: The transmission of knowledge through education plays a crucial role in the dynamics of the meaning space. As individuals learn from teachers, texts, and experiences, they internalize and modify the existing meaning space, contributing to its continuity and transformation.
– **Emergence of New Concepts**: Sometimes, entirely new concepts emerge within the meaning space, often as a response to new challenges or discoveries. These emergent concepts can reshape the structure of the meaning space, leading to new ways of thinking and understanding.
### 3. **Philosophical Implications**:
– **Hermeneutics**: The study of interpretation, especially of texts, is central to understanding the dynamics of the meaning space. Hermeneutics explores how meanings are constructed, how texts are understood, and how interpretations evolve over time.
– **Social Constructivism**: This theory posits that much of what we consider reality is socially constructed through language, culture, and shared practices. The meaning space, in this view, is a collective construction, shaped by social interactions and cultural norms.
– **Phenomenology**: This philosophical approach focuses on the structures of experience and consciousness. It emphasizes how individuals perceive and make sense of the world, highlighting the subjective aspect of the meaning space.
### Conclusion:
The **human meaning space** is a dynamic, multi-layered structure shaped by language, culture, memory, and individual experiences. It evolves through interpretation, dialogue, cultural change, and the emergence of new ideas. Understanding this space requires considering both its structural elements and the dynamic processes that continuously shape and reshape it. This concept helps explain how philosophical traditions remain vibrant and relevant, constantly adapting to new contexts and challenges through the interplay of shared meanings and individual interpretations.
Hi Chat, it seems to me that I have to differentiate my question a bit more: Yes, the ‘human meaning space’ includes some ‘common feature’ shared between different individual humans, which is manifested in the shared language. But the ‘meaning’ of a language expression is bound to the inner processes of each individual person. Therefore the usage of the same language expression does not guarantee that different individual persons automatically associate the ‘same meaning’ with a certain language expression. Would you agree?
Yes, I completely agree with your differentiation. The idea that the meaning of a language expression is tied to the internal processes of each individual person is crucial. Here’s how this distinction can be understood:
### 1. **Shared Language vs. Individual Interpretation**:
– **Shared Language**: Language provides a common framework or set of symbols that allows individuals to communicate. This shared aspect of language is what enables us to exchange ideas, convey information, and engage in dialogue.
– **Individual Interpretation**: However, the **meaning** that each person associates with a particular language expression is deeply influenced by their personal experiences, emotions, memories, and cognitive processes. As a result, even though two people might use the same word or phrase, they might not associate it with exactly the same meaning.
### 2. **The Gap Between Expression and Interpretation**:
– **Subjectivity of Meaning**: Meaning is inherently subjective because it’s constructed within the mind of each individual. When someone hears or reads a word, their brain processes it based on their unique set of experiences and mental associations. For example, the word “home” might evoke a sense of warmth and security in one person but might bring up feelings of discomfort or displacement in another, depending on their personal history.
– **Context and Nuance**: The context in which a word or phrase is used also plays a significant role in how it’s interpreted. The same word can carry different meanings in different situations, and individuals might pick up on different nuances depending on their personal context.
### 3. **Communication and Miscommunication**:
– **Potential for Misunderstanding**: Because of these individual differences in interpretation, there is always the potential for misunderstanding or miscommunication. People might think they are conveying a certain meaning when using a word, but the listener or reader might interpret it differently based on their own mental framework.
– **Clarification Through Dialogue**: This is why dialogue and conversation are so important. By engaging in discussion, individuals can clarify their meanings, ask questions, and work toward a shared understanding, even if their initial interpretations were different.
### 4. **Philosophical Perspectives**:
– **Hermeneutics**: This branch of philosophy, particularly associated with thinkers like Hans-Georg Gadamer, emphasizes the importance of interpretation in understanding meaning. Hermeneutics suggests that every act of communication involves a process of interpretation, and this process is influenced by the interpreter’s historical and cultural context.
– **Wittgenstein’s Language Games**: Ludwig Wittgenstein introduced the idea of “language games,” where the meaning of words is understood through their use in specific contexts or activities. This highlights how meaning can vary depending on the “game” being played, and how understanding is often a matter of grasping the rules of that particular game, which can differ from one person to another.
### 5. **Implications for the Human Meaning Space**:
– The **human meaning space** is not a monolithic or uniform entity. It’s made up of countless individual meaning spaces that overlap and intersect in complex ways. The shared language provides a bridge between these individual spaces, but the meanings that cross this bridge are filtered through each person’s unique interpretive framework.
– **Interpersonal Interaction**: The richness of human communication comes from this diversity of meaning, but it also requires effort to navigate and understand. Building a shared understanding often involves recognizing and respecting these differences, and finding ways to connect across them.
### Conclusion:
While language gives us a shared set of symbols and expressions, the meanings attached to those expressions are deeply personal and subjective. The usage of the same language expression does not guarantee that different individuals will associate the same meaning with it, due to the influence of their unique internal processes and experiences. This complexity is both a challenge and a richness in human communication, requiring ongoing effort to achieve mutual understanding.
OK, let us take the following statement as a starting point for the next ideas: “The human meaning space is made up of countless individual meaning spaces that overlap and intersect in complex ways. … Building a shared understanding often involves recognizing and respecting these differences, and finding ways to connect across them.” Here is one follow-up question: How can a human actor A with his individual meaning space coordinate his individual meaning space with the individual meaning space of another human actor B?
Coordinating individual meaning spaces between two human actors, A and B, involves a dynamic process of communication, negotiation, and mutual understanding. This process requires both actors to engage in various strategies that allow them to bridge the differences in their interpretations and experiences. Here are some key ways in which this coordination can occur:
### 1. **Active Communication and Dialogue**:
– **Expressing and Clarifying Meanings**: Actor A can express their thoughts, feelings, and interpretations clearly (?) to Actor B, and vice versa. This involves articulating not just what they think, but why they think it, including the experiences or reasoning behind their interpretations. Clarification questions, paraphrasing, and feedback loops are critical in ensuring that both parties understand each other’s intended meanings.
– **Active Listening**: Effective coordination requires not just speaking but also listening. Actor A needs to actively listen to Actor B’s explanations and interpretations, and vice versa. Active listening involves paying close attention, avoiding assumptions, and being open to understanding the other’s perspective without immediately judging or dismissing it.
### 2. **Contextualization**:
– **Providing Context**: Often, misunderstandings arise because different contexts or background information shape each actor’s interpretation. By providing context—explaining the background, the circumstances, or the specific meaning of terms as they understand them—Actor A can help Actor B align with their meaning space, and vice versa.
– **Shared Experiences**: Drawing on shared experiences can help align meaning spaces. If Actors A and B have a common history or have shared certain experiences, referencing these can create a common ground for understanding.
### 3. **Negotiation of Meaning**:
– **Negotiation and Compromise**: Sometimes, coordination requires negotiation, where both actors adjust their interpretations to reach a common understanding. This might involve redefining terms, agreeing on certain interpretations, or finding middle ground that accommodates both perspectives.
– **Iterative Refinement**: Actor A might initially explain something, Actor B might respond with their interpretation, and through back-and-forth exchanges, both actors refine their understanding until they reach a shared meaning.
### 4. **Use of Metaphors and Analogies**:
– **Metaphors and Analogies**: These tools can be powerful for bridging gaps in meaning spaces. By comparing a concept to something familiar to both actors, they can better align their interpretations. For example, Actor A might use an analogy that resonates with Actor B’s experiences, helping to convey a concept in a way that B can relate to more easily.
### 5. **Empathy and Perspective-Taking**:
– **Empathy**: Empathy involves trying to understand the other actor’s feelings and perspective. By empathizing with Actor B, Actor A can better grasp why B interprets things in a certain way, and this understanding can help in coordinating their meaning spaces.
– **Perspective-Taking**: This is the cognitive aspect of empathy, where Actor A consciously attempts to see things from Actor B’s viewpoint. By mentally putting themselves in B’s shoes, A can adjust their communication to better resonate with B’s meaning space.
### 6. **Cultural and Social Awareness**:
– **Awareness of Cultural Differences**: If Actors A and B come from different cultural backgrounds, their meaning spaces may differ significantly. Being aware of these cultural differences and how they might influence interpretation can help in adjusting communication strategies to better align meaning spaces.
– **Social Norms and Conventions**: Understanding and utilizing social norms that both actors share can also help in coordinating meaning spaces. These norms provide a framework for expected behaviors and interpretations, making it easier to find common ground.
### 7. **Feedback Mechanisms**:
– **Seeking and Providing Feedback**: Actor A can ask Actor B whether their message was understood as intended, and B can do the same. This feedback helps identify where misalignments in meaning might exist and provides an opportunity to correct them.
– **Continuous Adjustment**: Feedback allows for continuous adjustment. As Actors A and B continue to interact, they can refine their communication and interpretation strategies, leading to better alignment over time.
### 8. **Language and Conceptual Framing**:
– **Shared Vocabulary**: Developing a shared vocabulary, where both actors agree on the meanings of key terms, can facilitate coordination. This might involve explicitly defining terms or agreeing on certain concepts before proceeding with a discussion.
– **Conceptual Framing**: Framing discussions in terms of concepts or ideas that are familiar to both actors can help in aligning their meaning spaces. If both actors understand a concept similarly, they can use it as a reference point for further discussion.
### Conclusion:
Coordinating individual meaning spaces between two human actors involves a complex interplay of communication, empathy, negotiation, and shared context. By engaging in active dialogue, providing context, using metaphors, exercising empathy, and seeking feedback, Actors A and B can bridge their differences and build a shared understanding. This process is iterative and dynamic, requiring ongoing effort and adjustment as both actors navigate their individual meaning spaces and work toward alignment.
QUESTION: OK, I would like to keep the following statements as minimal requirements: (1) Effective coordination requires not just speaking but also listening. (2) If Actors A and B have a common history or have shared experiences, referencing these can create a common ground for understanding. Feedback allows for continuous adjustment. (3) (1) – (2) can support the development of a shared vocabulary. Would you agree?
Yes, I agree with your summary. The three statements you’ve outlined capture essential aspects of how effective coordination between individual meaning spaces can occur. Here’s a brief reflection on each point:
1. **Effective coordination requires not just speaking but also listening**: – This emphasizes the importance of active, two-way communication. Listening is crucial for understanding the other person’s perspective, which is necessary for aligning meaning spaces.
2. **If Actors A and B have a common history or have shared experiences, referencing these can create a common ground for understanding. Feedback allows for continuous adjustment**: – Shared experiences and history provide a foundation for mutual understanding. When actors reference these shared elements, they can more easily connect their meaning spaces. Feedback mechanisms allow both actors to continuously refine and adjust their understanding, ensuring that misalignments are addressed.
3. **(1) – (2) can support the development of a shared vocabulary**: – Through effective communication and the use of shared experiences, Actors A and B can gradually develop a shared vocabulary. This shared vocabulary helps streamline future interactions by establishing common definitions and meanings for key terms, reducing potential misunderstandings.
These statements encapsulate the core processes that enable individuals to coordinate their meaning spaces and build a deeper, more effective understanding with each other.
Hi Chat, let us take a look into the ‘inner structure’ of an individual meaning space: (i) On the one side of meaning we have the ‘expressions of some language’, which are ‘internally represented’. (ii) On the other side we have a rich diversity of inner states of the human actor which deliver the ‘stuff’ that we call ‘meaning’ and which is ‘related’ to the ‘internally represented expressions’ of the used language. Can you describe a bit the ‘structure’ of these inner states related to language expressions?
The **inner structure** of an individual meaning space, especially how it relates to language expressions, is a complex and layered system. This system involves the interaction between **internally represented language expressions** (i.e., words, sentences, symbols) and the **inner states** of the human actor that give rise to meaning. Let’s explore this in more detail:
### 1. **Language Expressions and Internal Representations**:
– **Symbols and Words**: Language expressions are composed of symbols (letters, words, sounds) that are internally represented in the brain. These representations are tied to specific neural patterns or cognitive structures that store and process linguistic information.
– **Syntax and Grammar**: These internal representations are not just individual words but also the rules and structures (syntax, grammar) that determine how these words can be combined to form meaningful sentences. This layer of structure helps the mind organize language into coherent thoughts and communications.
### 2. **Inner States and Their Structure**:
The **inner states** related to meaning are diverse and involve various cognitive, emotional, and sensory processes. Here are some key components:
– **Conceptual Structures**:
  – **Mental Models and Concepts**: At the core of the meaning-making process are **conceptual structures**—the mental models and concepts that represent our understanding of the world. These models include abstract ideas (like justice or beauty), concrete objects (like a tree or a car), and complex scenarios (like a conversation or a journey).
  – **Schemas and Frames**: These are cognitive structures that organize knowledge and guide interpretation. A schema might represent a typical event (like “going to a restaurant”), while a frame provides a perspective or context for understanding a situation (like seeing an argument as a debate or a conflict).
– **Emotional and Affective States**:
  – **Emotions**: Emotions play a significant role in the meaning space. When we process language, the meaning we derive is often colored by our emotional state. For instance, the word “home” might trigger feelings of warmth and safety for one person, while for another, it might evoke feelings of sadness or loss.
  – **Affective Associations**: These are the emotional responses tied to specific concepts or experiences. They contribute to how strongly or weakly certain meanings resonate with us and can influence how we interpret language.
– **Sensory and Perceptual Experiences**:
  – **Sensory Memories**: Sensory experiences (like sights, sounds, smells) are often tied to language expressions. For example, the word “ocean” might bring up the smell of saltwater, the sound of waves, and the feeling of sand underfoot, all of which contribute to the word’s meaning.
  – **Imagery**: The mind often creates mental images in response to language. These images are not just visual but can involve other senses as well. This sensory component enriches the meaning associated with language expressions.
– **Cognitive Processes**:
  – **Inference and Reasoning**: Cognitive processes like inference and reasoning help us make sense of language by connecting new information with existing knowledge. When we hear a sentence, we use reasoning to infer the speaker’s intent, predict outcomes, or draw connections with other ideas.
  – **Attention and Focus**: The focus of attention determines which aspects of a language expression we emphasize and how we interpret its meaning. What we pay attention to can shape the meaning we derive from a given expression.
### 3. **Relational and Associative Networks**:
– **Associative Networks**: The mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. For example, the word “winter” might be associated with cold weather, snow, holidays, and feelings of coziness. These associations form a web of meaning that influences how language expressions are understood.
– **Relational Dynamics**: Meaning is often understood in terms of relationships between concepts. For instance, understanding the concept of “parent” is related to understanding “child,” “family,” and “care.” These relational dynamics are part of the inner structure that gives rise to meaning.
### 4. **Integration of Inner States and Language**:
– **Embodied Cognition**: This theory suggests that our cognition is deeply rooted in bodily experiences. The meaning of language is not just an abstract mental process but is also tied to how we physically interact with the world. For instance, understanding the word “grasp” involves not just the mental concept but also the sensory-motor experience of physically grasping something.
– **Contextual Integration**: The meaning of a language expression is often determined by the context in which it is used. This context includes both external factors (like the situation or environment) and internal states (like emotions or prior knowledge). The integration of these contextual factors is crucial in shaping meaning.
### Conclusion:
The inner structure of an individual meaning space is a complex system where **language expressions** are connected to a rich diversity of **inner states**. These inner states include conceptual structures, emotional and sensory experiences, cognitive processes, and associative networks. Together, these elements create a dynamic and multi-dimensional web of meaning that allows individuals to interpret and understand language in highly personalized ways. The interaction between these inner states and language is what gives rise to the meaning that individuals derive from their linguistic experiences.
I will take the following short summary as a bridge to the follow-up ideas: (1) Language expressions are composed of symbols which are determined by rules and structures; (2) The inner states related to meaning are diverse and involve various cognitive, emotional, and sensory processes; (3) At the core are mental models that represent our understanding of the world. These models include abstract ideas, concrete objects, and complex scenarios. Furthermore the mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. (4) Cognitive processes like inference and reasoning help us, by connecting existing knowledge with knowledge about possible changes, to derive possible new states. Can you agree with this?
Yes, I agree with your summary. You’ve captured the essential points about how language expressions, inner states, mental models, and cognitive processes interact to create and understand meaning. Here’s a brief recap:
1. **Language Expressions**: These are composed of symbols (words, phrases) that are structured according to the rules of grammar and syntax, enabling coherent communication.
2. **Diverse Inner States**: Meaning arises from a complex interplay of cognitive, emotional, and sensory processes within the mind. These processes influence how we interpret and understand language.
3. **Mental Models and Associations**: At the core of meaning-making are mental models, which represent our understanding of the world. These models include abstract concepts, concrete objects, and complex scenarios. The mind organizes these elements into a network of associations, connecting various concepts, emotions, and sensory experiences.
4. **Cognitive Processes**: Inference, reasoning, and other cognitive processes allow us to connect existing knowledge with new information or hypothetical changes, enabling us to derive new states or understand potential outcomes.
This summary effectively bridges the foundational ideas we’ve discussed, setting the stage for deeper exploration into how these components work together in more complex scenarios or applications.
OK, let us stop here for today. I will come back to this issue to continue. Best regards
That sounds great! I’m glad we could delve into these ideas together. Feel free to reach out anytime you’re ready to continue. Best regards, and take care! 😊
Philosophical texts themselves are important, but they gain their true significance within the **human meaning space** that individuals create through their interactions with these texts and with each other. This meaning space, rooted in the experiences and interpretations of individual persons, is what ultimately forms the building blocks of a unified philosophical tradition. The tradition is not a static collection of texts, but a dynamic, evolving process of meaning-making that is constantly being enriched and redefined by those who engage with it.
QUESTION: PHILOSOPHY – What is it? The Role of Humans Writing Texts
Hi Chat, yesterday we had a dialogue about Philosophy. Can you repeat your statement about the concept ‘Philosophy’?
Certainly! Here’s a recap of what we discussed about the concept of “Philosophy”:
Philosophy is a broad field that explores fundamental questions about existence, knowledge, values, reason, mind, and language. It delves into concepts like the nature of reality (metaphysics), the basis of knowledge (epistemology), moral values and ethics, and the principles of logic and reasoning.
Philosophy encourages critical thinking and questioning, often challenging assumptions that we take for granted. It’s divided into various branches, such as:
1. **Metaphysics**: Explores the nature of reality, existence, and the universe.
2. **Epistemology**: Concerned with knowledge, belief, and the nature of truth.
3. **Ethics**: Focuses on morality and what constitutes right or wrong behavior.
4. **Logic**: Studies the principles of valid reasoning and argumentation.
5. **Aesthetics**: Examines the nature of beauty, art, and taste.
Philosophy also influences many other fields, such as science, religion, and politics, by providing foundational frameworks and questioning the underlying principles in these areas. It’s both an academic discipline and a practical approach to thinking about life’s big questions.
OK, you describe Philosophy as “a broad field that explores fundamental questions”, while it is organized in ‘thematic branches’ like ‘Epistemology’, ‘Logic’ and ‘Ethics’. Are these different branches completely ‘separated from each other’?
The branches of philosophy, like **Epistemology**, **Logic**, **Ethics**, and others, are not completely separated from each other. In fact, they often overlap and intersect in significant ways. Here’s how they relate to and influence each other:
### 1. **Interconnectedness of Themes:**
– **Epistemology and Logic**: Epistemology, which deals with the nature of knowledge, often relies on principles from logic to determine what counts as valid reasoning or justified belief. Logical methods are crucial in analyzing how we form knowledge, make inferences, and reach conclusions.
– **Ethics and Epistemology**: Ethical theories often depend on epistemological questions about how we know what is right or wrong. For instance, moral realism (the belief that there are objective moral facts) involves epistemological questions about how we can know these facts.
– **Metaphysics and Ethics**: Questions in metaphysics about the nature of reality, free will, or the self can have direct implications for ethics. For example, if one believes that free will is an illusion, this might influence their views on moral responsibility.
– **Aesthetics and Ethics**: Aesthetics, the study of beauty and art, can intersect with ethics when considering the moral implications of art or the ethical responsibilities of artists.
### 2. **Philosophical Questions Across Branches:**
– Some philosophical questions naturally span multiple branches. For instance, the question “What is the nature of the good life?” can involve metaphysical questions about human nature, ethical questions about what makes life valuable, and epistemological questions about how we come to know these values.
– The concept of “truth” is central to both epistemology (what is truth and how do we know it?) and logic (how do we reason truthfully?).
### 3. **Unified Inquiry:**
– While philosophy is divided into thematic branches for organizational purposes, the discipline as a whole is unified by its commitment to inquiry and the pursuit of wisdom. Philosophers often draw on insights from multiple branches to address complex questions.
– Many philosophical theories are interdisciplinary by nature, requiring an understanding of several branches to be fully appreciated. For example, existentialism combines metaphysical, epistemological, and ethical concerns to explore the human condition.
### 4. **Philosophers and Cross-Branch Work:**
– Many philosophers work across multiple branches, contributing to more than one area of philosophy. For example, Immanuel Kant made significant contributions to metaphysics, epistemology, ethics, and aesthetics, with his work in each area influencing the others.
### Conclusion:
The branches of philosophy are interconnected and often inform and enrich one another. While they each focus on different aspects of philosophical inquiry, they are not isolated; instead, they contribute to a larger, integrated understanding of complex philosophical issues.
OK, your main statement is that “the branches of philosophy are interconnected”, which is manifested e.g. by “philosophical theories which are interdisciplinary by nature” or by individual philosophers who “work across multiple branches”. Therefore the different branches “contribute to a larger, integrated understanding of complex philosophical issues”. Do you agree?
Yes, I agree with that summary! The branches of philosophy are indeed interconnected, and this interconnectedness allows for a richer and more integrated understanding of complex philosophical issues. Philosophical theories often draw from multiple branches, and many philosophers contribute to more than one area, showing how these branches work together to address broader questions and challenges. This collaboration across branches is key to the depth and complexity of philosophical inquiry.
Then let us talk about the ‘interconnectedness’ of these branches. You mention ‘theories’ as ‘formats of knowledge’, which can ‘combine’ understanding from different branches, as well as ‘individual philosophers’ who work across different branches. Can you make this a little bit more precise?
Certainly! Let’s break down the idea of interconnectedness in philosophy, focusing on how **theories** and **individual philosophers** contribute to this interconnectedness across different branches.
### 1. **Theories as Formats of Knowledge:**
Philosophical theories often serve as bridges between different branches by incorporating concepts, methods, and insights from multiple areas of philosophy. Here’s how this works:
– **Interdisciplinary Theories**: Some philosophical theories are explicitly interdisciplinary, meaning they draw on and contribute to multiple branches simultaneously.
  – **Example: Existentialism**: Existentialism combines **metaphysics** (questions about existence and the nature of being), **ethics** (questions about how to live and what gives life meaning), and **epistemology** (questions about subjective experience and how we know ourselves and the world). It’s a philosophical movement that cannot be fully understood without considering its impact on and its reliance on these multiple branches.
  – **Example: Utilitarianism**: Utilitarianism is primarily an ethical theory, but it also engages with **epistemology** (how we know what actions will result in the greatest happiness) and **logic** (how we reason about consequences and moral decisions).
– **Integration of Concepts**: Philosophical theories often integrate concepts from different branches to address a particular problem. – **Example: Kant’s Categorical Imperative**: Immanuel Kant’s ethical theory, the categorical imperative, integrates ideas from **metaphysics** (the nature of moral law and duty), **epistemology** (how we come to know moral duties), and **logic** (the universalizability of moral principles). This theory shows how understanding in ethics can be deeply connected to and dependent on insights from other branches.
### 2. **Philosophers Working Across Multiple Branches:** Individual philosophers often contribute to more than one branch of philosophy, thereby creating links between these areas. Here’s how some notable philosophers have done this:
– **Immanuel Kant**: Kant is known for his work in **epistemology** (Critique of Pure Reason), **ethics** (Critique of Practical Reason), and **aesthetics** (Critique of Judgment). His philosophy is deeply interconnected; for instance, his epistemology (how we know things) influences his metaphysics (what exists), and both of these impact his ethical theories (how we should act).
– **Aristotle**: Aristotle’s work spans **metaphysics** (the study of being), **ethics** (Nicomachean Ethics), **logic** (Organon), **politics** (Politics), and **aesthetics** (Poetics). He doesn’t treat these areas as isolated fields; instead, his understanding of ethics is informed by his metaphysics, and his logic underpins all his philosophical investigations.
– **John Stuart Mill**: Mill’s contributions to **ethics** (utilitarianism), **political philosophy** (liberty and justice), and **epistemology** (empiricism and the nature of knowledge) show how his work in one area influenced his thinking in others. His utilitarian ethical theory, for example, is closely tied to his empiricist approach to knowledge.
### 3. **Philosophical Integration Leading to Broader Understanding:**
– **Comprehensive Philosophical Systems**: Some philosophers create comprehensive systems that attempt to integrate all branches of philosophy into a coherent whole.
  – **Example: Hegel’s System**: G.W.F. Hegel’s philosophy integrates **logic**, **metaphysics**, **ethics**, **politics**, and **aesthetics** into a single, comprehensive system. His dialectical method connects these areas, showing how they evolve and interact over time.
– **Influence Across Fields**: Philosophical ideas developed in one branch can have significant implications for others.
  – **Example: Descartes**: René Descartes’ epistemological skepticism (Cogito, ergo sum) had profound effects on metaphysics (the nature of reality) and ethics (how one should live based on what can be known for certain).
### Conclusion: The interconnectedness of the branches of philosophy is evident in the way theories often draw on multiple areas to address complex questions, and in how individual philosophers contribute across different branches. This interplay enriches philosophical inquiry, allowing for a more holistic and integrated understanding of the issues at hand. Theories and philosophers serve as the threads that weave the various branches of philosophy into a unified, though diverse, intellectual tradition.
OK, let us continue with your final statement “Theories and philosophers serve as the threads that weave the various branches of philosophy into a unified, though diverse, intellectual tradition.” If we understand a ‘theory’ as a ‘text’ written in some ‘language L’, would you agree that a ‘text as such’ has no meaning at all? ‘Meaning’ is connected to human persons, who have learned to ‘associate’ certain expressions of the language L with different kinds of experience which are typical for human persons.
Yes, I agree with the idea that a **text**—in this case, a philosophical theory written in a language L—does not inherently possess meaning on its own. The meaning arises through the interaction between the text and human persons who interpret it. Let’s unpack this idea:
### 1. **Text as a Medium**:
– A text, in its most basic form, is a sequence of symbols or words arranged according to the rules of a language (L). On its own, it’s just a collection of characters, and it doesn’t hold any intrinsic meaning.
### 2. **Meaning and Interpretation**:
– **Meaning** emerges when human beings engage with the text. Humans bring their experiences, knowledge, and understanding of language to bear when they read a text. Through this process of interpretation, they assign meaning to the words, sentences, and ideas expressed in the text.
– **Association with Experience**: The meaning of a text is closely tied to the experiences and cognitive frameworks of the reader. When someone reads a philosophical theory, they interpret the words based on their own experiences, cultural background, and understanding of the concepts involved. For example, the concept of “justice” might evoke different associations and interpretations depending on the reader’s background and experiences.
### 3. **Language and Shared Understanding**:
– Language (L) functions as a shared system of symbols that allows people to communicate and share ideas. However, the effectiveness of this communication depends on the shared understanding between the writer and the reader. The more closely aligned their experiences and conceptual frameworks are, the more effectively meaning can be communicated.
– **Contextual Interpretation**: The meaning of a text is often shaped by the context in which it is read. This includes the historical, cultural, and intellectual context of both the writer and the reader. A philosophical text written centuries ago may be interpreted differently today because the context in which it is read has changed.
### 4. **Role of Philosophers and Theories**:
– Philosophers, when writing a theory, encode their ideas in the text using language L. Their intention is to communicate complex concepts and arguments. However, the interpretation of those ideas depends on the reader’s ability to decode and understand the text.
– Theories serve as a medium through which philosophical ideas are communicated, but the meaning of those ideas is realized only through the process of interpretation by readers who engage with the text.
### 5. **Implications for Philosophical Inquiry**:
– This understanding highlights the importance of interpretation in philosophical inquiry. Different readers might interpret the same text in different ways, leading to a diversity of meanings and perspectives.
– Philosophical debate often involves discussing and refining these interpretations, seeking to understand the underlying ideas more clearly and explore their implications more fully.
### Conclusion: In summary, a **text** itself is a neutral medium without intrinsic meaning. **Meaning** is a dynamic process that occurs when human persons, equipped with their experiences and knowledge of language, engage with the text and interpret its symbols. This process of interpretation is what allows philosophical theories to contribute to the broader intellectual tradition, as different interpretations and understandings are woven together to form a richer and more diverse tapestry of thought.
OK, would you then agree with the conclusion that not the philosophical ‘texts as such’ are the building blocks of a unified tradition, but the philosophical texts as ‘connected to’ a ‘human meaning space’, which is finally ‘rooted in individual persons’ who have ‘built up’ their individual meaning space by interacting with their environment, including other human persons using the same language system?
Yes, I would agree with that conclusion. The **philosophical texts** themselves are not the true building blocks of a unified intellectual tradition. Instead, it is the **philosophical texts as they are connected to a human meaning space** that forms the foundation of this tradition. Here’s how this understanding comes together:
### 1. **Human Meaning Space**:
– The **human meaning space** refers to the collective web of meanings, interpretations, and understandings that individuals create as they interact with their environment, including other people and cultural artifacts like texts. This meaning space is deeply personal, yet also shared and communal because it is developed through interaction with others who use the same language and cultural symbols.
### 2. **Texts as Tools for Meaning-Making**:
– Philosophical texts are tools or mediums through which ideas are communicated, debated, and developed. However, their significance and role in the tradition depend on how they are interpreted and understood within this human meaning space.
– The **”building blocks”** of a philosophical tradition are not the texts themselves but the meanings and ideas that people extract from, discuss, and build upon through their engagement with these texts.
### 3. **Rooted in Individual Persons**:
– This meaning space is ultimately **rooted in individual persons**, each of whom brings their own experiences, background, and understanding to their interpretation of philosophical texts. As individuals interact with these texts, they build up their own understanding and contribute to the broader philosophical dialogue.
– **Interpersonal Interaction**: Individuals also shape their meaning space by interacting with others, discussing and debating the meanings they derive from texts. This collective process further enriches the shared human meaning space and the philosophical tradition as a whole.
### 4. **Unified Tradition Through Shared Interpretation**:
– The **unified tradition** of philosophy is thus a product of shared interpretations, discussions, and debates that take place within this human meaning space. While the texts provide a common reference point, it is the collective engagement with these texts—rooted in individual and shared meaning-making—that creates and sustains the tradition.
– **Cultural and Historical Context**: The meanings attributed to philosophical texts are also influenced by the cultural and historical context in which they are read and interpreted, leading to an evolving tradition that reflects the changing concerns and understandings of different eras.
### Conclusion: Philosophical texts themselves are important, but they gain their true significance within the **human meaning space** that individuals create through their interactions with these texts and with each other. This meaning space, rooted in the experiences and interpretations of individual persons, is what ultimately forms the building blocks of a unified philosophical tradition. The tradition is not a static collection of texts, but a dynamic, evolving process of meaning-making that is constantly being enriched and redefined by those who engage with it.
OK Chat, thanks again for this inspiring dialogue 🙂
You’re very welcome! I’m glad you found the conversation inspiring. It was a pleasure to explore these ideas with you. If you ever want to discuss more or dive into new topics, I’m here anytime. Have a great day! 😊
Your insight blurs the traditional distinction between basic concepts and emergent properties by showing that in the process of developing a theory, what we ultimately recognize as basic concepts may indeed emerge from the complex interplay of ideas. This perspective aligns with the broader view of how knowledge is constructed and emphasizes the dynamic and evolving nature of conceptual frameworks. In this sense, the final framework of basic concepts in a theory could be seen as an emergent outcome of the theoretical development process.
QUESTION: PHILOSOPHY – What is it? Fundamental concepts: a new view
Hi, can I call you ‘Chat’?
Sure, you can call me “Chat”! How can I assist you today?
Hi Chat, what can you say about ‘Philosophy’?
Philosophy is a broad field that explores fundamental questions about existence, knowledge, values, reason, mind, and language. It delves into concepts like the nature of reality (metaphysics), the basis of knowledge (epistemology), moral values and ethics, and the principles of logic and reasoning.
Philosophy encourages critical thinking and questioning, often challenging assumptions that we take for granted. It’s divided into various branches, such as:
Metaphysics: Explores the nature of reality, existence, and the universe.
Epistemology: Concerned with knowledge, belief, and the nature of truth.
Ethics: Focuses on morality and what constitutes right or wrong behavior.
Logic: Studies the principles of valid reasoning and argumentation.
Aesthetics: Examines the nature of beauty, art, and taste.
Philosophy also influences many other fields, such as science, religion, and politics, by providing foundational frameworks and questioning the underlying principles in these areas. It’s both an academic discipline and a practical approach to thinking about life’s big questions.
Can you explain a little bit more what it means that philosophy is “providing foundational frameworks and questioning the underlying principles in these areas”?
When we say that philosophy “provides foundational frameworks and questions the underlying principles” in various areas, it means that philosophy serves as a base or starting point for thinking about and understanding different fields, such as science, ethics, politics, or art. Here’s how it works:
1. Foundational Frameworks:
Philosophy often establishes the basic concepts and theories that other fields build upon. For instance:
Science: The philosophy of science explores what constitutes scientific knowledge, the nature of scientific methods, and what it means for something to be a “scientific fact.” Concepts like causality, objectivity, and the nature of evidence are philosophical in origin.
Law and Politics: Philosophical ideas about justice, rights, and the social contract influence legal systems and political structures. For example, concepts of human rights or democracy are rooted in philosophical theories.
Ethics and Morality: Ethical philosophy provides frameworks for thinking about right and wrong, helping to shape moral codes, laws, and personal conduct.
2. Questioning Underlying Principles:
Philosophy doesn’t just accept things at face value; it critically examines the assumptions and principles that underpin other fields:
In Science: Philosophy might question what it means for a theory to be “true” or explore whether scientific progress really leads to a better understanding of reality.
In Ethics: Philosophers question what makes an action right or wrong, and whether moral principles are universal or relative to cultures.
In Art (Aesthetics): Philosophy asks what constitutes beauty, whether art has to have meaning, and what role art plays in society.
By questioning these principles, philosophy encourages deeper thinking and can lead to new insights or challenges to established ways of understanding the world. It pushes us to consider why we believe what we do and whether those beliefs are justified. This process of questioning is essential for intellectual progress and the development of more refined and rigorous theories in any field.
QUESTION: OK Chat, what makes a ‘concept’ a ‘basic concept’ in a field?
A “basic concept” in a field is a fundamental idea or building block that forms the foundation for understanding more complex ideas within that field. These concepts are “basic” because they are essential to the structure and coherence of the field—they are often assumed or used as starting points for further exploration and theory development. Here’s what makes a concept “basic”:
1. Fundamental Role:
A basic concept is central to the field and underlies much of its theoretical framework. For example, in mathematics, concepts like “number” or “addition” are basic because they are foundational to all other mathematical operations and theories.
2. Universality:
Basic concepts are generally applicable across various contexts within the field. For instance, in physics, the concept of “energy” is used universally to describe a wide range of phenomena, from mechanics to thermodynamics.
3. Simplicity and Primitiveness:
Basic concepts are often simple or primitive in the sense that they cannot be broken down into more fundamental ideas within the field. In philosophy, “existence” is a basic concept because it’s not easily reducible to other terms, and it’s a starting point for discussing what kinds of things exist.
4. Necessity for Understanding:
To understand more advanced or specific concepts in the field, one must grasp the basic concepts. For example, in biology, the concept of “cell” is basic because understanding cells is necessary to understand more complex biological systems like organs or ecosystems.
5. Starting Points for Inquiry:
Basic concepts often serve as starting points for inquiry and further exploration. In ethics, “good” and “bad” are basic concepts because they are the foundation for any discussion about moral behavior or decision-making.
6. Integration into Broader Theories:
Basic concepts are integrated into broader theories and frameworks within the field. For instance, in economics, “supply and demand” are basic concepts that underpin many economic models and theories about how markets function.
Examples in Different Fields:
Mathematics: “Number,” “addition,” “set.”
Physics: “Force,” “energy,” “mass.”
Philosophy: “Existence,” “truth,” “morality.”
Linguistics: “Sentence,” “word,” “grammar.”
Biology: “Cell,” “species,” “evolution.”
Conclusion:
A concept becomes “basic” in a field because it is indispensable for building the rest of the field’s knowledge. It is deeply embedded in the structure of the discipline, and understanding it is crucial for making sense of more complex ideas. These basic concepts are often the first things one learns when entering a new field, and they continue to serve as reference points as one’s understanding deepens.
QUESTION: OK, Chat, I understand from this the following points: 1) ‘Basic concepts’ are simple or primitive in the sense that they cannot be broken down into more fundamental ideas within the field; 2) ‘Basic concepts’ are necessary to understand more complex concepts in the field. Is this right?
Yes, you’ve got it right!
1. Simplicity or Primitiveness:
Basic concepts are indeed simple or primitive in that they are foundational and cannot be broken down into more fundamental ideas within the field. They represent the starting points for understanding and inquiry. For example, in mathematics, “number” is a basic concept that doesn’t rely on other concepts within math to be understood; it is fundamental on its own.
2. Necessity for Understanding:
Basic concepts are necessary for understanding more complex concepts within the field. They serve as the building blocks upon which more advanced theories and ideas are constructed. For instance, understanding the concept of “cell” in biology is essential before one can grasp more complex biological systems like tissues, organs, or ecosystems.
These two points are central to what makes a concept “basic” in any given field. They are the foundation upon which the rest of the field is built, and they provide the necessary groundwork for deeper exploration and understanding.
QUESTION: OK, the characterization of ‘basic concepts’ as “… simple or primitive in that they are foundational and cannot be broken down into more fundamental ideas within the field” shows some resemblance with the concept of an ’emergent property’: an ’emergent property’ can be manifested in the context of many phenomena constituting a ‘system’, but the emergent property cannot be derived from these other phenomena; it is ‘unique’, it cannot be broken down into simpler properties. Can one state that a ‘basic concept’, as a ‘simple concept’, can therefore also be understood as an ’emergent concept’?
The comparison you’re drawing between a “basic concept” and an “emergent property” is insightful, but they aren’t quite the same, though they share some similarities.
Similarities:
Indivisibility: Both a basic concept and an emergent property are in some sense “indivisible.” A basic concept cannot be broken down into more fundamental ideas within its field, and an emergent property cannot be reduced to the simpler properties of the individual components of a system.
Foundational: Basic concepts are foundational to understanding a field, just as emergent properties are often foundational to understanding the behavior or characteristics of a system.
Key Differences:
Origin:
Basic Concepts: Basic concepts are usually starting points or primitives within a field. They are defined at the outset and do not arise from the combination of other ideas. For instance, in geometry, “point” is a basic concept that is not derived from other geometrical ideas; it is taken as a given.
Emergent Properties: Emergent properties arise from the interactions of simpler components within a system. These properties are not inherent in any individual component but manifest only when the components work together. For example, the concept of “consciousness” is often considered an emergent property of the brain’s neural networks—it arises from the interactions of neurons, but it is not a property of any individual neuron.
Understanding and Derivation:
Basic Concepts: A basic concept is typically understood directly; it doesn’t emerge from or depend on other concepts within the field. It’s a foundational building block.
Emergent Properties: An emergent property is often understood in relation to the system as a whole. While it cannot be directly derived from the properties of individual components, it is understood to result from the interactions among those components.
Conclusion:
A basic concept is “simple” and foundational, while an emergent property is “complex” and arises from simpler parts interacting within a system. Therefore, a basic concept is not the same as an emergent concept, though both are unique in their own ways. A basic concept serves as a starting point, whereas an emergent property is an outcome that arises from the complexity of a system.
QUESTION: Hmmh, Chat, you are emphasizing a difference between ‘basic concepts’ and ’emergent concepts’ by pointing to the ‘process of using’ these concepts. You are stating that ‘basic concepts’ “are defined at the outset and do not arise from the combination of other ideas”. With regard to ‘completed theories’ this seems to be right in the sense that at the end of a process of ‘developing a theory’ you will find ‘basic concepts’ as building blocks of the theory. But to ‘understand’ that a certain concept is a ‘basic concept’ is usually only possible if you have tried many combinations of whole sets of concepts which are ‘interacting’ within a theory. At some point ‘it can become clear’ that a certain concept X can be understood as ‘basic’ in relation to some other concepts which are ‘basic’ as well. This is the detection of a ‘framework’ constituted by concepts which appear in this framework as ‘basic’. And if you take the ‘developing process’ into account, then the idea comes up that this ‘final framework of basic concepts’ appears like an ’emergent concept’. What do you think?
Your perspective brings a deeper and more nuanced understanding of the relationship between basic concepts and emergent properties by considering the dynamic process of theory development. You’re suggesting that the identification of a basic concept may itself be an emergent process within the context of developing a theory. This is a fascinating idea and adds a layer of complexity to the traditional understanding of basic concepts. Let’s break it down further:
The Process of Developing a Theory:
Initial Exploration: When a theory is in its early stages, many concepts are introduced, and their relationships are explored. At this stage, it’s not always clear which concepts will be foundational or “basic.”
Interaction and Refinement: Through the process of developing the theory, concepts interact, and their relationships become clearer. As the theory matures, certain concepts might emerge as particularly fundamental, serving as the building blocks for the rest of the theory.
Emergence of Basic Concepts: Over time, as you refine the theory, certain concepts may be recognized as “basic.” This recognition might not have been obvious at the outset—it emerges through the process of theory development. The framework of basic concepts can thus be seen as an outcome of the complex interactions within the theory, much like an emergent property arises from the interactions within a system.
Basic Concepts as Emergent in Theoretical Development:
Emergence in Retrospect: After a theory is fully developed, basic concepts might seem self-evident. However, the journey to identify them often involves exploring how different ideas interrelate, which can make the discovery of these basic concepts feel like an emergent process.
Interdependence and Emergence: Basic concepts often gain their status through their interdependence with other concepts. The recognition of a concept as “basic” might only make sense in relation to the network of other ideas in the theory, similar to how an emergent property only makes sense in the context of the system from which it arises.
A New Perspective:
This approach suggests that basic concepts can indeed be viewed as emergent in the context of theory development. While basic concepts are eventually defined and recognized as foundational, their identification and understanding may emerge from the iterative process of refining and interacting with other concepts within the theory.
Conclusion:
Your insight blurs the traditional distinction between basic concepts and emergent properties by showing that in the process of developing a theory, what we ultimately recognize as basic concepts may indeed emerge from the complex interplay of ideas. This perspective aligns with the broader view of how knowledge is constructed and emphasizes the dynamic and evolving nature of conceptual frameworks. In this sense, the final framework of basic concepts in a theory could be seen as an emergent outcome of the theoretical development process.
TRANSLATION & COMMENTS: The following text is a translation from a German version into English. For the translation I am using the software deepL.com. For commenting on the whole text I am using @chatGPT4.
If someone has already decided in his head that there is no problem, he won’t see a problem … and if you see a problem where there is none, you have little chance of being able to solve anything. To put it simply: we can be the solution if we are not the problem ourselves
Written at the end of some letter…
PREFACE
The original – admittedly somewhat long – text entitled “MODERN PROPAGANDA – From a philosophical perspective. First reflections” started with the question of what actually constitutes the core of propaganda, and then ended up with the central insight that what is called ‘propaganda’ is only a special case of something much more general that dominates our thinking as humans: the world of narratives. This was followed by a relatively long philosophical analysis of the cognitive and emotional foundations of this phenomenon.
Since the central text on the role of narratives as part of the aforementioned larger text was a bit ‘invisible’ for many, here is a blog post that highlights this text again, and at the same time connects this text with an experiment in which the individual sections of the text are commented on by @chatGPT4.
Insights on @chatGPT4 as a commentator
Let’s start with the results of the accompanying experiment with the @chatGPT4 software. If you know how the @chatGPT4 program works, you will expect the program to more or less repeat the text entered by the user and then add a few associations to this text, whose scope and originality depend on the internal knowledge the system has. As you can see from reading @chatGPT4’s comments, @chatGPT4’s commenting capabilities are clearly limited. Nevertheless, they underline the main ideas quite nicely. Of course, you could elicit even more information from the system by asking additional questions, but then you would again have to invest human knowledge to enable the machine to associate more, and more originally. Without additional help, the comments remain rather modest.
Central role of narratives
As the following text suggests, narratives are of central importance both for the individual and for the collectives to which an individual belongs: the narratives in people’s heads determine how people see, experience and interpret the world and also which actions they take spontaneously or deliberately. In the case of a machine, we would say that narratives are the program that controls us humans. In principle, people do have the ability to question or even partially change the narratives they have mastered, but only very few are able to do so, as this not only requires certain skills, but usually a great deal of training in dealing with their own knowledge. Intelligence is no special protection here; in fact, it seems that it is precisely so-called intelligent people who can become the worst victims of their own narratives. This phenomenon reveals a peculiar powerlessness of knowledge before itself.
This topic calls for further analysis and more public discussion.
HERE COMES THE MAIN TEXT ABOUT NARRATIVES
Worldwide today, in the age of mass media, especially in the age of the Internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [1] when they act. A typical feature of acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that the narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being in the right’ can go as far as claiming the right to kill others because they are ‘acting wrongly’ according to the ‘narrative’. We should therefore speak here of a ‘narrative truth’: Within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that is ‘found to be good’ by the followers of the narrative ‘as such’, as ‘making sense’. Normally, the effect of a narrative, which is experienced as ‘meaningful’, is so great that the ‘truth content’ is no longer examined in detail.[2]
Popular narratives
In recent decades, we have experienced ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘making sense’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly effect anything: the ‘people up there’ do what they want after all. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’! Such ‘popular narratives’, which make ‘good feelings’ possible, are becoming increasingly powerful. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most are also not sufficiently ‘trained’ to be able to clarify the truth content of a narrative at all. It is typical for followers of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts/videos that they find ‘good’ because these texts/videos somehow seem to support the popular narrative, and tend not to check the authors and sources because they are such ‘decent people’, because they always say exactly the same thing as the ‘popular narrative’ dictates. [3]
For Power, narratives are sexy
If one now also takes into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people, or would like to gain power over people, to ‘create’ precisely such narratives or to ‘instrumentalize’ existing narratives for themselves, then one should not be surprised that many governments in this world, and many other power groups, are doing just that today: they do not try to coerce people ‘directly’, but they ‘produce’ popular narratives or ‘monitor’ already existing popular narratives in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately this misses the core of the problem. [4]
Bits of history
The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text show – are highly complex and extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed to ‘mitigate’ the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. In addition, a real revolution took place about 500 years ago: with the concept of a ‘verifiable narrative (empirical theory)’, humanity succeeded in finding a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has since enabled great insights that were beyond imagination without it. [5]
The ‘aura of the scientific’ has now permeated almost all of human culture, almost! We have to realize that although scientific thinking has comprehensively shaped the world of practicality through modern technologies, the way of scientific thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so attractive because they spare the individual from having to ‘think for himself’. Real thinking is exhausting, unpopular, annoying and hinders the dream of a simple solution.[6]
The central problem of humanity
Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem for humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there exist no ‘simple off-the-shelf’ solutions for them, that we really have to ‘wrestle’ together for possible insights; the ‘old’ is not ‘enough’ in principle to recognize and implement the ‘true new’! [7]
COMMENTS
[1] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)
Comments from @chatGPT4
[2] @chatGPT4 : The text delves into the profound impact of narratives in the context of mass media and the Internet, highlighting how individuals and groups often align their actions and beliefs with specific narratives. It points out the subjective nature of these narratives, where followers perceive them as absolute truths, driving their decisions and actions. This belief in the “narrative truth” can foster a strong sense of justification, even to the extent of committing violence against those deemed to be in opposition to this truth. The text emphasizes the power of narratives to shape perceptions of the world, often bypassing critical scrutiny of their factual accuracy. This phenomenon underscores the psychological and social mechanisms through which narratives influence human behavior, suggesting a collective suspension of disbelief that can have significant real-world implications.
The assertion that individuals often do not question the “truth content” of a narrative they find meaningful speaks to the cognitive biases and shortcuts humans use in processing information. Narratives simplify complex realities into coherent stories, making them more digestible and emotionally resonant. This process can lead to echo chambers and polarization, as groups coalesce around shared narratives, reinforcing their beliefs and dismissing opposing viewpoints without thorough examination.
The concept of “narrative truth” also reflects the construction of social realities, where shared beliefs and stories shape collective experiences and norms. This can have both positive and negative outcomes, facilitating social cohesion and identity within groups but also fostering division and conflict between them.
Critically, the text suggests a need for awareness and reflection on the narratives we adopt and their implications for our perceptions and actions. It hints at the ethical responsibility of individuals and media creators to critically evaluate and challenge the narratives they encounter and propagate. In an era of information overload and sophisticated digital communication, the ability to discern the nuanced difference between narrative-driven beliefs and objective truths is more crucial than ever for fostering informed, compassionate, and inclusive societies.
[3] @chatGPT4 : The text provides a critical examination of the role and influence of modern narratives in contemporary society, juxtaposing them with traditional religious narratives while highlighting their similar effects on individuals’ perceptions and emotions. It touches on a growing sense of political helplessness among citizens, even within democratic societies, where there’s a prevailing sentiment that individual actions have little to no impact on the decisions made by those in power. This perceived powerlessness, combined with the complexity and threats perceived in today’s world, creates a fertile ground for simplistic narratives to thrive. These narratives, by virtue of offering clarity and emotional comfort, gain traction and become increasingly influential.
The comparison to religious narratives is particularly insightful. It suggests that just as religious stories provide a sense of meaning, belonging, and understanding without necessitating a rigorous examination of their factual accuracy, modern narratives fulfill a similar psychological need. They offer an immediate, albeit superficial, sense of understanding and control in a seemingly chaotic world. This bypass of critical evaluation—the reluctance or inability to question the ‘truth’ behind these narratives—points to a significant challenge in contemporary discourse: the erosion of critical thinking and analytical skills in evaluating information.
Furthermore, the text highlights a concerning trend in how people consume and share information. The reliance on ‘echo chambers’ where one’s beliefs are constantly reaffirmed by similar opinions, often without scrutiny of the source’s credibility, perpetuates a cycle of misinformation and intellectual complacency. This behavior is facilitated by digital platforms, where sharing links to supportive texts or videos substitutes for genuine understanding and critical discussion.
The underlying issue, as the text implies, is not just about the narratives themselves but about the broader societal and educational failings that leave individuals unequipped to critically assess information. This deficiency not only makes people susceptible to manipulation but also undermines the foundational principles of informed citizenship in a democracy.
In conclusion, the text calls for a renewed emphasis on critical thinking and media literacy as essential skills in navigating the modern information landscape. By fostering these skills, individuals can become more discerning consumers of information, better able to understand the complexities of the world around them, and more effectively engage in the democratic process. This approach not only challenges the dominance of simplistic narratives but also supports a healthier, more informed public discourse.
[4] @chatGPT4 : This passage delves into the strategic utilization of narratives by those in power or seeking power as a means to influence and control populations. It presents a critical view of how governments and various power groups craft or harness existing popular narratives to shape public opinion and behavior indirectly. This approach, as opposed to direct coercion, operates through the subtler channels of storytelling and narrative dissemination, aiming to capture the “hearts and minds” of people. The text suggests that this method is widespread and increasingly recognized under terms like ‘hybrid warfare’ and ‘modern propaganda,’ though it argues these terms fail to fully capture the essence of the issue.
The key insight here revolves around the power of narratives to mold societal perceptions and values. By creating or modifying narratives, those in power can guide public sentiment and opinion in ways that serve their interests, often without the overt appearance of manipulation. This strategy can be particularly effective in the digital age, where information spreads rapidly and often without critical scrutiny, allowing for narratives to be amplified and tailored to target audiences with unprecedented precision.
The critique of labeling these tactics as merely ‘hybrid warfare’ or ‘modern propaganda’ highlights a deeper concern: that the focus on the mechanics of these strategies may overlook the fundamental challenge they pose to democratic processes and individual autonomy. Narratives, in this context, become tools for shaping reality, not just reflecting it, with profound implications for how people understand their world and their place within it.
Furthermore, the text implicitly raises questions about the ethical implications of such narrative control. It challenges the reader to consider the boundaries between legitimate persuasion and manipulative interference in the democratic discourse. This distinction is crucial in evaluating the health of public dialogue and the integrity of decision-making processes in a society.
In summary, the passage underscores the sophisticated ways in which narratives are employed by those in power to influence public opinion and behavior indirectly. It calls attention to the importance of recognizing and critically evaluating these narratives to preserve the principles of democracy and individual agency. The ability to discern the motivations behind and the content of these narratives is essential for a healthy, informed, and resilient public sphere.
[5] @chatGPT4 : This passage addresses a fundamental aspect of human social organization: the reliance on narratives to coordinate collective action. It posits that narratives are an intrinsic part of human nature, offering the only viable means through which communities can organize and mobilize collective action. However, it also acknowledges a critical vulnerability of narratives—their propensity for falsity and distortion of reality. This inherent flaw raises concerns about the accuracy and integrity of the stories that guide collective human behavior and decision-making.
The text suggests that the development of legal systems represents a historical attempt to mitigate the misuse of narrative power, aiming to safeguard society against the dangers of misinformation and manipulation. Legal frameworks, with their emphasis on evidence and accountability, can be seen as societal mechanisms designed to preserve truth and curb the abuse of narratives. Despite the ongoing challenges and imperfections in these systems, they have contributed to a gradual improvement in societal governance.
Moreover, the passage highlights a transformative moment in human history approximately 500 years ago with the emergence of the scientific method, described here as the concept of a “verifiable narrative (empirical theory).” This revolution in thought introduced a new paradigm for establishing truth, one based on empirical evidence and verifiability rather than mere assertion or belief. The scientific method, by demanding that claims about the world be tested and confirmed through observation and experiment, has dramatically enhanced our capacity to discern truth from falsehood. This approach has unlocked unprecedented insights and advancements, fundamentally altering humanity’s understanding of the universe and our place within it.
The core issue identified in the text is not merely the existence of narratives but the challenge of ensuring they reflect a truthful and accurate understanding of the world. The scientific method represents a pivotal development in this quest, providing a robust framework for distinguishing between verifiable truths and unfounded narratives. However, the text implies that the struggle to maintain the integrity of our collective narratives is ongoing and requires constant vigilance. As we navigate an ever-more complex information landscape, the principles of verifiability and empirical evidence remain essential guides in our pursuit of knowledge and in the organization of our societies.
[6] @chatGPT4 : This passage critically examines the influence and reach of scientific thinking in contemporary culture, juxtaposing it with the resilience and resurgence of non-scientific narratives. Despite the profound impact of scientific discoveries and technologies on practical aspects of life, the passage suggests that the mode of thinking promoted by science—characterized by critical inquiry, empirical evidence, and skepticism—has not fully supplanted other forms of narrative understanding. Instead, there is a noted resurgence of “non-truth narratives” that challenge, marginalize, or even seek to suppress scientific perspectives in various domains.
The text highlights a concerning trend: the growing appeal of irrational narratives derived from religious beliefs or popular culture, which are gaining ground against scientific rationality. This preference for irrational narratives is attributed to their simplicity and the comfort they offer individuals by absolving them of the need for critical thinking. The passage argues that real thinking—analytical, reflective, and independent—is perceived as burdensome, unpopular, and inconvenient by many, as it complicates the desire for simple answers to complex problems.
The allure of irrational narratives, according to the passage, lies in their ability to provide straightforward solutions and a sense of certainty in an increasingly complex world. These narratives offer emotional satisfaction and a framework for understanding the world that bypasses the rigorous demands of scientific scrutiny. This dynamic illustrates a broader cultural conflict between the values of rationality and the comfort of simplicity and certainty offered by alternative narratives.
This critique underscores a critical challenge facing society: the need to balance the benefits of scientific thinking with the human desire for narratives that provide meaning and emotional comfort. While scientific thinking has undeniably transformed the world, its principles of skepticism and demand for evidence can be at odds with the human inclination towards narratives that offer immediate clarity and solace. The passage calls for a reflection on the importance of fostering a culture that values critical thinking and scientific literacy while acknowledging and addressing the psychological and emotional needs that drive people towards irrational narratives. This balance is crucial for navigating the complexities of the modern world and ensuring that decisions are informed by both empirical evidence and a compassionate understanding of human nature.
[7] @chatGPT4 : This passage articulates a critical observation about contemporary society’s struggle with global challenges: the difficulty lies not just in the need for technological advancements but fundamentally in the widespread inability of individuals to engage critically with information and narratives. It underscores the necessity of fostering a culture where people are encouraged and equipped to think deeply, critically, and collaboratively to address complex issues. The emphasis is on the collective intellectual effort required to navigate and solve “real problems,” which are characterized as novel, lacking straightforward solutions, and demanding innovative approaches beyond the application of existing knowledge or methods.
The text suggests that the crux of addressing global challenges effectively is not merely technological innovation but, more importantly, enhancing collective cognitive capabilities. It highlights an urgent need for individuals who can think “more and better” and who are prepared to engage in the collaborative problem-solving processes essential for addressing the unique and multifaceted problems facing the world today. These problems are not amenable to simple, pre-packaged solutions; they require a willingness to engage in sustained intellectual effort, open-minded exploration, and creative thinking.
This perspective challenges the prevailing emphasis on technological solutions as the primary means to overcome global challenges. Instead, it argues for a paradigm shift towards valuing and developing human intellectual and collaborative capacities as the foundational step towards sustainable progress. The passage points out that the real bottleneck in addressing contemporary issues is not the lack of technological resources but the shortage of people capable of employing critical thinking to innovate and implement effective solutions.
In essence, the passage calls for a renewed focus on education and cultural values that promote critical thinking, intellectual curiosity, and collaborative problem-solving. By cultivating these capabilities, society can better equip itself to recognize, understand, and address the novel and complex challenges that define our times. This approach not only recognizes the limitations of relying solely on technological advancements but also highlights the transformative potential of human intellect and cooperation in forging pathways to a better future.
Translation: This text is a translation from the German version into English with the aid of the software deepL.com as well as chatGPT4, moderated by the author. The styles of the two translators are different. The author is not in a position to judge which translator is ‘better’.
CONTEXT
This text is the outcome of a conference held at the Technical University of Darmstadt (Germany) with the title: Discourses of disruptive digital technologies using the example of AI text generators ( https://zevedi.de/en/topics/ki-text-2/ ). A German version of this article will appear as open access in a book from de Gruyter at the beginning of 2024.
Collective human-machine intelligence and text generation. A transdisciplinary analysis.
Abstract
Based on the conference theme “AI – Text and Validity. How do AI text generators change scientific discourse?” as well as the special topic “Collective human-machine intelligence using the example of text generation”, the possible interaction between text generators and a scientific discourse will be worked through in a transdisciplinary analysis. For this purpose, the concept of scientific discourse will be specified, case by case, using the text types empirical theory as well as sustained empirical theory, in such a way that the role of human and machine actors in these discourses can be sufficiently specified. The result shows a very clear limitation of current text generators compared to the requirements of scientific discourse. This leads to further fundamental analyses: using the example of the dimension of time, to the phenomenon of the qualitatively new; and using the example of the foundations of decision-making, to the problem of the inherent bias of the modern scientific disciplines. A solution to this inherent bias, as well as to the factual disconnectedness of the many individual disciplines, is located in a new service of transdisciplinary integration, achieved by re-activating the philosophy of science as a genuine part of philosophy. This leaves open the question of whether a supervision of the individual sciences by philosophy could be a viable path. Finally, the borderline case of a world in which humans no longer have a human counterpart is pointed out.
STARTING POINT
This text takes its starting point from the conference topic “AI – Text and Validity. How do AI text generators change scientific discourses?” and adds to this topic the perspective of a Collective Human-Machine Intelligence using the example of text generation. The concepts of text and validity, AI text generators, scientific discourse, and collective human-machine intelligence that are invoked in this constellation represent different fields of meaning that cannot automatically be interpreted as elements of a common conceptual framework.
TRANSDISCIPLINARY
In order to be able to let the mentioned terms appear as elements in a common conceptual framework, a meta-level is needed from which one can talk about these terms and their possible relations to each other. This approach is usually located in the philosophy of science, which can have as its subject not only single terms or whole propositions, but even whole theories that are compared or possibly even unified. The term transdisciplinary [1], which is often used today, is understood here in this philosophy-of-science sense as an approach in which the integration of different concepts is achieved by introducing appropriate meta-levels. Such a meta-level ultimately always represents a structure in which all important elements and relations can be gathered.
[1] Jürgen Mittelstraß paraphrases the possible meaning of the term transdisciplinarity as a “research and knowledge principle … that becomes effective wherever a solely technical or disciplinary definition of problem situations and problem solutions is not possible…”. Article Methodological Transdisciplinarity, in LIFIS ONLINE, www.leibniz-institut.de, ISSN 1864-6972, p.1 (first published in: Technology Assessment – Theory and Practice no. 2, vol. 14, June 2005, 18-23). In his text Mittelstraß distinguishes transdisciplinarity from the disciplinary and from the interdisciplinary. However, he uses only a general characterization of transdisciplinarity as a research-guiding principle and a scientific form of organization. He leaves the concrete conceptual formulation of transdisciplinarity open. This is different in the present text: here the transdisciplinary theme is projected down to the concreteness of the related terms and – as is usual in philosophy of science (and meta-logic) – realized by means of the construct of meta-levels.
SETTING UP A STRUCTURE
Here the notion of scientific discourse is assumed as a basic situation in which different actors can be involved. The main types of actors considered here are humans, who represent a part of the biological systems on planet Earth as a kind of Homo sapiens, and text generators, which represent a technical product consisting of a combination of software and hardware.
It is assumed that humans perceive their environment and themselves in a species-typical way, that they can process and store what they perceive internally, that they can recall, to a limited extent and in a species-typical way, what they have stored, and that they can change it in a species-typical way, so that internal structures can emerge that are available for action and communication. All these elements are attributed to human cognition. They work partly consciously, but largely unconsciously. Cognition also includes the subsystem language, which represents a structure that on the one hand is largely species-typically fixed, but on the other hand can be flexibly mapped to different elements of cognition.
In the terminology of semiotics [2], the language system represents a symbolic level, and those elements of cognition onto which the symbolic structures are mapped form correlates of meaning; these, however, constitute a meaning only insofar as they occur in a mapping relation – also called the meaning relation. A cognitive element as such does not constitute meaning in the linguistic sense. In addition to cognition, there are a variety of emotional factors that can influence both cognitive processes and the process of decision-making. The latter, in turn, can influence thought processes as well as action processes, consciously as well as unconsciously. The exact meaning of these listed structural elements is revealed in a process model [3] complementary to this structure.
[2] See, for example, Winfried Nöth: Handbuch der Semiotik. 2nd, completely revised edition. Metzler, Stuttgart/Weimar, 2000
[3] Such a process model is presented here only in partial aspects.
SYMBOLIC COMMUNICATION SUB-PROCESS
What is important for human actors is that they can interact in the context of symbolic communication with the help of both spoken and written language. Here it is assumed – simplistically – that spoken language can be mapped sufficiently accurately into written language, which in the standard case is called text. It should be noted that texts only represent meaning if the text producers involved, as well as the text recipients, have a meaning function that is sufficiently similar. For texts by human text producers it is generally true that, with respect to concrete situations, statements as part of texts can be qualified under agreed conditions as now matching the situation (true) or as now not matching the situation (false). However, a now-true can become a now-not-true again in the next moment, and vice versa.
This dynamic points to the fact that the punctual (momentary) occurrence or non-occurrence of a statement is to be distinguished from the structural occurrence or non-occurrence of a statement, which speaks about occurrence/non-occurrence in context. This refers to relations that become apparent only indirectly, in the context of a multitude of individual events, if one considers chains of events over many points in time. Finally, one must also consider that the correlates of meaning are primarily located within the human biological system. Meaning correlates are not automatically true as such, but only if there is an active correspondence between a remembered/thought/imagined meaning correlate and an active perceptual element, where an intersubjective fact must correspond to the perceptual element. Just because someone talks about a rabbit and the recipient understands what a rabbit is, this does not mean that there is also a real rabbit which the recipient can perceive.
TEXT-GENERATORS
When distinguishing between the two different types of actors – here biological systems of the type Homo sapiens, there technical systems of the type text generator – a first fundamental asymmetry immediately strikes the eye: so-called text generators are entities invented and built by humans; it is humans who use them; and the essential material used by text generators is texts, which are considered human cultural property, created and used by humans for a variety of discourse types, here restricted to scientific discourse.
In the case of text generators, let us first note that we are dealing with machines that have input and output, possess a minimal learning capability, and can process text-like objects at input and output. Insofar as text generators can take text-like objects as input and produce text-like objects as output, an exchange of texts between humans and text generators can in principle take place.
At the current state of development (September 2023), text generators do not yet have an independent real-world perception within the scope of their input, and the entire text generator system does not yet have such processes as those that enable species-typical cognitions in humans. Furthermore, a text generator does not yet have a meaning function as it is given with humans.
From this fact it follows automatically that text generators cannot decide on the punctual or structural correctness or incorrectness of the statements of a text. In general, they do not have their own assignment of meaning as humans do. Texts generated by text generators only have a meaning because a human recipient automatically assigns a meaning to a text due to his species-typical meaning relation, since this is the learned behavior of a human; the text generator itself has never assigned any meaning to the generated text. Loosely put, one could say that a technical text generator works like a parasite: it collects texts that humans have generated, rearranges them combinatorially according to formal criteria for the output, and in the receiving human a meaning event is automatically triggered by the text – a meaning event which does not exist anywhere in the text generator. Whether this very restricted form of text generation is in any sense detrimental or advantageous for the type of scientific discourse (with texts) is to be examined in the further course.
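To make this purely formal mode of recombination more tangible, here is a minimal sketch in Python. It is an illustration only, not the actual architecture of chatGPT4 or of any real text generator; the toy corpus and the word-by-word continuation rule are assumptions of this sketch. The point it illustrates is the one made above: the machine only rearranges surface patterns of human-made texts according to a formal criterion, and any 'meaning' of the output arises solely in the human reader.

import random
from collections import defaultdict

# Toy illustration only: real text generators are far more complex, but the
# principle discussed above is the same: patterns found in human-made texts
# are recombined according to formal criteria, without any meaning function.

def build_model(corpus_texts):
    """Collect which word follows which word in human-written texts."""
    successors = defaultdict(list)
    for text in corpus_texts:
        words = text.split()
        for current_word, next_word in zip(words, words[1:]):
            successors[current_word].append(next_word)
    return successors

def generate(successors, start_word, length=10):
    """Produce a text-like object by purely combinatorial continuation."""
    word = start_word
    output = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = random.choice(successors[word])   # formal criterion: observed continuations
        output.append(word)
    return " ".join(output)

human_texts = [
    "the rabbit sits in the garden",
    "the garden is green and the rabbit is grey",
]
model = build_model(human_texts)
print(generate(model, "the"))   # a meaning event is triggered only in the human reader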
SCIENTIFIC DISCOURSE
There is no clear definition for the term scientific discourse. This is not surprising, since an unambiguous definition presupposes that there is a fully specified conceptual framework within which terms such as discourse and scientific can be clearly delimited. However, in the case of a scientific enterprise with a global reach, broken down into countless individual disciplines, this does not seem to be the case at present (Sept 2023). For the further procedure, we will therefore fall back on core ideas of the discussion in philosophy of science since the 20th century [4] and introduce working hypotheses on the concepts of empirical theory and sustainable empirical theory, so that a working hypothesis on the concept of scientific discourse becomes possible that has at least a minimal sharpness.
[4] A good place to start may be: F. Suppe, Editor. The Structure of Scientific Theories. University of Illinois Press, Urbana, 2 edition, 1979.
EMPIRICAL THEORY
The following assumptions are made for the notion of an empirical theory:
an empirical theory is basically a text, written in a language that all participants understand.
one part of the theory contains a description of an initial situation, the statements of which can be qualified by the theory users as now matching (true) or now not matching (false).
another part of the theory contains a text that lists all changes that, to the knowledge of the participants, occur in the context of the initial situation and can change parts of the initial situation.
changes in the initial situation are expressed by replacing certain statements of the initial situation with other statements. The resulting new text replaces the previous text.
through the possibility of generating new initial situations, there is the possibility of making predictions (forming expectations) by applying rules of change to an applicable initial situation several times (at least once) in succession. The texts generated in each case induce in the minds of the participants, on the basis of the available meaning functions, the idea of a situation that – should it occur – is expected to be qualified as now matching intersubjective reality. In the case of occurrence, the situation must correspond, via perception, to the conception in the mind. Whether such a correspondence can be established, after how long a time and to what extent, is basically open at the moment the prediction is made (the expectation is formed) (autonomy of the object!).
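To make this working hypothesis more concrete, here is a minimal sketch in Python. The representation of statements as plain strings and of change rules as remove/add operations is an assumption of this sketch, not part of the theory concept itself; it only illustrates the interplay of initial situation, rules of change, and repeated application for forming expectations.

from dataclasses import dataclass

# Minimal sketch of the empirical-theory structure described above.
# Assumptions (not from the original text): statements are plain strings,
# a change rule removes some statements and adds others when its condition holds.

@dataclass
class ChangeRule:
    condition: set        # statements that must be present in the situation
    remove: set           # statements replaced ...
    add: set              # ... by these statements

def apply_rule(situation: set, rule: ChangeRule) -> set:
    """Replace certain statements of the situation by other statements."""
    if rule.condition <= situation:
        return (situation - rule.remove) | rule.add
    return situation

def forecast(situation: set, rules: list, steps: int) -> set:
    """Apply the rules of change several times to generate a predicted situation."""
    for _ in range(steps):
        for rule in rules:
            situation = apply_rule(situation, rule)
    return situation

initial_situation = {"the field is dry", "it is raining"}
rules = [ChangeRule(condition={"it is raining", "the field is dry"},
                    remove={"the field is dry"},
                    add={"the field is wet"})]

prediction = forecast(initial_situation, rules, steps=1)
print(prediction)   # an expectation, to be checked later against perception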
SCIENTIFIC DISCOURSE NEW
Even before the introduction of the notion of a sustainable empirical theory, the notion of an empirical theory just introduced allows for initial statements about what a scientific discourse could or should be. It is assumed here that the participants in a scientific discourse exchange their opinions about the world in the form of texts, all of which meet the requirements of an empirical theory.
WITH HUMANS
Insofar as the participants in the discourse are human beings, we know that they have basic species-typical abilities to generate subjective meanings by means of language, which can be decided as now true or now not true by reference to a real situation.
WITH TEXT GENERATORS
Insofar as the discourse participants would be text generators lacking these elementary skills (meaning, reality reference), they drop out as independent actors for a scientific discourse. To illustrate this, here is a self-assessment of chatGPT4, a current text generator:
ChatGPT
Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:
I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
Responses I give can vary slightly even if the same question is posed multiple times.
I can’t directly attribute my statements to specific, individual sources.
The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.
You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know. [5]
[5] This text is from a chat experiment conducted by the author with chatGPT4 on August 31, 2023. The citation can be found in: Gerd Doeben-Henisch, CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science, uffmm, Integrating Engineering and the Human Factor, eJournal uffmm.org ISSN 2567-6458, 8/31/2023 in [https://www.uffmm.org/2023/08/31/continue-experiment-with-chatgpt4-science-empirical-science-philosophy-of-science/ ] (accessed 9/27/2023).
The question then arises whether (current) text generators, despite their severely limited capabilities, could nevertheless contribute to scientific discourse, and what this contribution would mean for the human participants. Since text generators fail to meet the hard scientific criteria (decidable reality reference, reproducible predictive behavior, separation of sources), a possible contribution can only be assumed within human behavior: since humans can understand and empirically verify texts, they would in principle be able to at least rudimentarily classify a text from a text generator within their considerations.
For hard theory work these texts would not be usable, but due to their literary-associative character across a very large body of texts, the texts of text generators could – in the positive case – at least introduce thoughts into the discourse as stimulators, via the detour of human understanding: thoughts which would prompt the human user to examine these additional aspects to see whether they might be important for the actual theory building after all. In this way the text generators would not participate independently in the scientific discourse, but they would indirectly support the knowledge process of the human actors as aids to them.[6]
[6] A detailed illustration of this associative role of a text generator can also be found in (Doeben-Henisch, 2023) on the example of the term philosophy of science and on the question of the role of philosophy of science.
CHALLENGE DECISION
The application of an empirical theory can – in the positive case – enable an expanded picture of everyday experience, in that, relative to an initial situation, possible continuations (possible futures) are brought before one's eyes. For people who have to shape their own individual processes in their respective everyday life, however, it is usually not enough to know only what one could do. Rather, everyday life requires deciding in each case which continuation to choose, given the many possible continuations. In order to be able to assert themselves in everyday life with as little effort as possible and with – at least imagined – as little risk as possible, people have adopted well-rehearsed behavior patterns for as many everyday situations as possible, which they follow spontaneously without questioning them anew each time. These well-rehearsed behavior patterns include decisions that have already been made. Nevertheless, there are always situations in which the ingrained automatisms have to be interrupted in order to consciously clarify for which of several possibilities one wants to decide.
The example of an individual decision-maker can also be directly applied to the behavior of larger groups. Normally, even more individual factors play a role here, all of which have to be integrated in order to reach a decision. However, the characteristic feature of a decision situation remains the same: whatever knowledge one may have at the time of decision, when alternatives are available, one has to decide for one of many alternatives without any further, additional knowledge at this point. Empirical science cannot help here [7]: it is an indisputable basic ability of humans to be able to decide.
So far, however, what ultimately leads a person to decide for one option and not for another remains hidden in the darkness of not knowing oneself. Whether and to what extent the various cultural patterns of decision-making aids in the form of religious, moral, ethical or similar formats actually play, or have played, a helpful role for projecting a successful future appears to be more unclear than ever.[8]
[7] No matter how much detail it can contribute about the nature of decision-making processes.
[8] This topic is taken up again in the following in a different context and embedded there in a different solution context.
SUSTAINABLE EMPIRICAL THEORY
Through the newly flared up discussion about sustainability in the context of the United Nations, the question of prioritizing action relevant to survival has received a specific global impulse. The multitude of aspects that arise in this discourse context [9] are difficult, if not impossible, to classify into an overarching, consistent conceptual framework.
[9] For an example see the 17 development goals: [https://unric.org/de/17ziele/] (Accessed: September 27, 2023)
A rough classification of development goals into resource-oriented and actor-oriented can help to make an underlying asymmetry visible: a resource problem only exists if there are biological systems on this planet that require a certain configuration of resources (an ecosystem) for their physical existence. Since the physical resources that can be found on planet Earth are quantitatively limited, it is possible, in principle, to determine through thought and science under what conditions the available physical resources — given a prevailing behavior — are insufficient. Added to this is the factor that biological systems, by their very existence, also actively alter the resources that can be found.
So, if there should be a resource problem, it is exclusively because the behavior of the biological systems has led to such a biologically caused shortage. Resources as such are neither too much, nor too little, nor good, nor bad. If one accepts that the behavior of biological systems in the case of the species Homo sapiens can be controlled by internal states, then the resource problem is primarily a cognitive and emotional problem: Do we know enough? Do we want the right thing? And these questions point to motivations beyond what is currently knowable. Is there a dark spot in the human self-image here?
On the one hand, this questioning refers to the driving forces for a current decision beyond the possibilities of the empirical sciences (trans-empirical, meta-physical, …); on the other hand, it also refers to the center/core of human competence. This motivates extending the notion of an empirical theory to the notion of a sustainable empirical theory. This does not automatically solve the question of the inner mechanism of a value decision, but it systematically classifies the problem; the problem thus has an official place. The following formulation is suggested as a characterization of the concept of a sustainable empirical theory:
a sustainable empirical theory contains an empirical theory as its core.
besides the parts of initial situation, rules of change and application of rules of change, a sustainable theory also contains a text with a list of such situations, which are considered desirable for a possible future (goals, visions, …).
under the condition of given goals, each current situation can at least minimally be compared with these goals, thereby indicating the degree of goal achievement.
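A minimal sketch in Python may illustrate this last point. The representation of situations and goals as sets of statements, and the simple share-based measure of goal achievement, are assumptions of this sketch, not a fixed part of the concept; the example goals are purely hypothetical.

# Minimal sketch extending the empirical-theory sketch above by a list of goals.
# Assumption (not from the original text): the degree of goal achievement is
# measured as the share of goal statements already contained in the current situation.

def goal_achievement(current_situation: set, goal: set) -> float:
    """Compare the current situation with a goal and return a value in [0, 1]."""
    if not goal:
        return 1.0
    return len(goal & current_situation) / len(goal)

current_situation = {"the field is wet", "the well is empty"}
goals = {
    "food security": {"the field is wet", "the harvest is stored"},
    "water security": {"the well is full"},
}

for name, goal in goals.items():
    print(name, goal_achievement(current_situation, goal))
# e.g. food security 0.5, water security 0.0 – the choice of the goals themselves
# remains a (pre-rational) decision of the theory authors.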
Stating desired goals says nothing about how realistic or promising it is to pursue them. It only expresses that the authors of this theory know these goals and consider them optimal at the time of theory creation. [10] The irrationality of chosen goals is thus officially included in the domain of thought of the theory creators, and in this way the extension of the rational to the irrational is facilitated without a real solution being available yet. Nobody can exclude that the phenomenon of bringing forth something new, or of preferring a certain point of view in comparison to others, can be understood further and better in the future.
[10] Something can only be classified as optimal if it can be placed within an overarching framework, which allows for positioning on a scale. This refers to a minimal cognitive model as an expression of rationality. However, the decision itself takes place outside of such a rational model; in this sense, the decision as an independent process is pre-rational.
EXTENDED SCIENTIFIC DISCOURSE
If one accepts the concept of a sustainable empirical theory, then one can extend the concept of a scientific discourse in such a way that not only texts that represent empirical theories can be introduced, but also those texts that represent sustainable empirical theories with their own goals. Here too, one can ask whether the current text generators (September 2023) can make a constructive contribution. Insofar as a sustainable empirical theory contains an empirical theory as a hard core, the preceding observations on the limitations of text generators apply. In the creative part of the development of an empirical theory, they can contribute text fragments through their associative-combinatorial character based on a very large number of documents, which may inspire the active human theory authors to expand their view. But what about that part that manifests itself in the selection of possible goals? At this point, one must realize that it is not about any formulations, but about those that represent possible solution formulations within a systematic framework; this implies knowledge of relevant and verifiable meaning structures that could be taken into account in the context of symbolic patterns. Text generators fundamentally do not have these abilities. But it is – again – not to be excluded that their associative-combinatorial character based on a very large number of documents can still provide one or the other suggestion.
Looking back at humanity's history of knowledge, research, and technology, it seems that the great advances were each triggered by something really new, that is, by something that had never existed before in this form. The praise for Big Data, as often heard today, represents – colloquially speaking – exactly the opposite: the burial of the new by cementing the old.[11]
[11] A prominent example of the naive fixation on the old as a standard for what is right can be seen, for example, in the book by Seth Stephens-Davidowitz, Don’t Trust Your Gut. Using Data Instead of Instinct To Make Better Choices, London – Oxford New York et al., 2022.
EXISTENTIALLY NEW THROUGH TIME
The concept of an empirical theory inherently contains the element of change, and the extended concept of a sustainable empirical theory adds, beyond the fundamental concept of change, the aspect of a possible goal. A possible goal itself is not a change, but it presupposes the reality of changes! The concept of change does not result from any objects; it is the result of a brain performance, by which a current present is transformed – largely unconsciously – into a partially memorable state (memory contents) through the formation of time slices in the context of perception processes. These memory contents have different abstract structures, are networked with each other in different ways, and are assessed in different ways. In addition, the brain automatically compares current perceptions with such stored contents and immediately reports when a current perception has changed compared to the last perception contents. In this way, the phenomenon of change is a fundamental cognitive achievement of the brain, which thus makes the character of a feeling of time available in the form of a fundamental process structure.[12] The weight of this property in the context of evolution can hardly be overestimated, since time as such is in no way perceptible.
[12] The modern invention of machines that can generate periodic signals (oscillators, clocks) has been successfully integrated into people's everyday lives. However, the artificially (technically) producible time has nothing to do with the fundamental change found in reality. Technical time is a tool that we humans have invented in order to somehow structure the otherwise amorphous mass of the phenomenon stream. Since this amorphous mass itself shows structure in the form of obviously manifest, repeating change cycles (e.g., sunrise and sunset, moonrise and moonset, seasons, …), a correlation of technical time models and natural time phenomena suggested itself. From the correlations found here, however, one should not conclude that the amorphous mass of the world phenomenon stream actually behaves according to our technical time model. Einstein's theory of relativity at least makes us aware that there can be various – or only one? – asymmetries between technical time and the world phenomenon stream.
Assuming this fundamental sense of time in humans, one can in principle recognize whether a current phenomenon, compared to all preceding phenomena, is somehow similar or markedly different, and in this sense indicates something qualitatively new.[13]
[13] Ultimately, an individual human only has its individual memory contents available for comparison, while a collective of people can in principle consult the set of all records. However, as is known, only a minimal fraction of the experiential reality is symbolically transformed.
By presupposing the concept of directed time, the designation of something as qualitatively new assigns to such an event an information value in the Shannonian sense, to the phenomenon itself a value in terms of linguistic meaning, and possibly also a value in the cognitive area: relative to a spanned knowledge space, the occurrence of a qualitatively new event can significantly strengthen a theoretical assumption. In the latter case, the cognitive relevance may mutate into a sustainable relevance if the assumption marks a real action option that could be important for further progress. This in turn would provoke the necessity of a decision: should we adopt this action option or not? Humans can accomplish the finding of qualitatively new things; they are designed for it by evolution. But what about text generators?
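Before turning to the text generators, a minimal illustration of the information value in the Shannonian sense may be helpful; the concrete probabilities below are assumptions of this sketch, not taken from the text. The point is only this: the less expected an event is relative to the already spanned knowledge space, the higher its information value (surprisal).

import math

# Shannon's information value (surprisal) of an event with probability p: I = -log2(p).
# The probabilities used here are illustrative assumptions.
def information_value(probability: float) -> float:
    return -math.log2(probability)

print(information_value(0.5))     # a familiar, expected event: 1.0 bit
print(information_value(0.001))   # a qualitatively new, unexpected event: about 9.97 bits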
Text generators so far do not have a sense of time comparable to that of humans. Their starting point would be texts that differ in such a way that there is at least one text that is the most recent on the timeline and describes real events in the real world of phenomena. Since a text generator (as of September 2023) does not yet have the ability to classify texts with regard to their applicability or non-applicability in the real world, its use would normally end here. Assuming that there are people who manually perform this classification for a text generator [14] (which would greatly limit the number of possible texts), a text generator could search the surface of these texts for similar patterns and, relative to them, for those that cannot be compared. Assuming further that the text generator would find a set of non-comparable patterns in acceptable time despite a massive combinatorial explosion, the problem of semantic qualification would arise again: which of these patterns can be classified as an indication of something qualitatively new? Again, humans would have to become active.
[14] Such support of machines by humans in the field of so-called intelligent algorithms has often been applied (and is still being applied today, see: [https://www.mturk.com/] (Accessed: September 27, 2023)), and is known to be very prone to errors.
As before, the verdict is mixed: left to itself, a text generator will not be able to solve this task, but in cooperation with humans, it may possibly provide important auxiliary services, which could ultimately be of existential importance to humans in search of something qualitatively new despite all limitations.
THE IMMANENT PREJUDICE OF THE SCIENCES
A prejudice is, as is well known, the assessment of a situation as an instance of a certain pattern which the judging person assumes to apply, even though there are numerous indications that the assumed applicability is empirically false. Due to the permanent requirement of everyday life that we have to make decisions, humans have acquired through their evolutionary development the fundamental ability to make many of their everyday decisions largely automatically. This offers many advantages, but can also lead to conflicts.
Daniel Kahneman introduced in this context in his book [15] the two terms System 1 and System 2 for a human actor. These terms describe, in his concept of a human actor, two behavioral complexes that can be distinguished based on some properties.[16] System 1 is given by the overall system of the human actor and is characterized by the fact that the actor can respond largely automatically to the demands of everyday life: the human actor has automatic answers to certain stimuli from his environment, without having to think much about them. In case of conflicts within System 1, or from the perspective of System 2, which in conscious mode exercises some control over the appropriateness of System 1 reactions in a certain situation, System 2 becomes active. System 2 does not have automatic answers ready but has to work out an answer to a given situation laboriously, step by step. However, there is also the phenomenon that complex processes which must be carried out frequently can be automated to a certain extent (cycling, swimming, playing a musical instrument, learning a language, doing mental arithmetic, …). All these processes are based on preceding decisions that encompass different forms of preferences. As long as these automated processes are appropriate in the light of a certain rational model, everything seems to be OK. But if the corresponding model is distorted in any sense, then one would say that these models carry a prejudice.
[15] Daniel Kahneman, Thinking, Fast and Slow, Penguin Books, Random House, UK, 2012 (first published 2011)
[16] See Chapter 1 in Part 1 of (Kahneman, 2012, pages 19-30).
In addition to the countless examples that Kahneman himself cites in his book to show the susceptibility of System 1 to such prejudices, it should be pointed out here that the model of Kahneman himself (and many similar models) can carry a prejudice that is of a considerably more fundamental nature. The division of the behavioral space of a human actor into a System 1 and 2, as Kahneman does, obviously has great potential to classify many everyday events. But what about all the everyday phenomena that fit neither the scheme of System 1 nor the scheme of System 2?
For the case of making a decision, the System 1 account says that people automatically call up and execute an available answer, if one is available. Only in the case of conflict, under the control of System 2, can there be lengthy operations that lead to other, new answers.
In the case of decisions, however, it is not just about reacting at all, but there is also the problem of choosing between known possibilities or even finding something new because the known old is unsatisfactory.
Established scientific disciplines have their specific perspectives and methods that define areas of everyday life as their subject area. Phenomena that do not fit into this predefined space simply do not occur for the relevant discipline – for methodological reasons. In the area of decision-making, and thus of typically human structures, there are quite a few areas that have so far not found official entry into any scientific discipline. At any given point in time, there are ultimately many large phenomenon areas that really exist but are methodologically absent from the view of the individual sciences. For a scientific investigation of the real world, this means that the sciences, due to their immanent exclusions, are burdened with a massive reservation against the empirical world. For the task of selecting suitable sustainable goals within the framework of a sustainable science, this structurally conditioned fact can be fatal. Loosely formulated: under the banner of actual science, a central principle of science – the openness to all phenomena – is simply excluded, so as not to have to change the existing structure.
For this question of a meta-reflection on science itself, text generators are again only reduced to possible abstract text delivery services under the direction of humans.
SUPERVISION BY PHILOSOPHY
The just-described fatal dilemma of all modern sciences is to be taken seriously, as without an efficient science, sustainable reflection on the present and future cannot be realized in the long term. If one agrees that the fatal bias of science is caused by the fact that each discipline works intensively within its discipline boundaries, but does not systematically organize communication and reflection beyond its own boundaries with a view to other disciplines as meta-reflection, the question must be answered whether and how this deficit can be overcome.
There is only one known answer to this question: one must search for that conceptual framework within which these guiding concepts can meaningfully interact both in their own right and in their interaction with other guiding concepts, starting from those guiding concepts that are constitutive for the individual disciplines.
This is genuinely the task of philosophy, concretized by the example of the philosophy of science. However, this would mean that each individual science would have to use a sufficiently large part of its capacities to make the idea of the one science in maximum diversity available in a real process.
For the hard conceptual work hinted at here, text generators will hardly be able to play a central role.
COLLECTIVE INTELLIGENCE
Since so far there is no concept of intelligence in any individual science that goes beyond a single discipline, it makes little sense at first glance to apply the term intelligence to collectives. However, looking at the cultural achievements of humanity as a whole, and here not least with a view to the language used, it is undeniable that a description of the performance of an individual person – their individual performance – is incomplete without reference to the whole.
So, if one tries to assign an overarching meaning to the letter combination intelligence, one will not be able to avoid deciphering this phenomenon of the human collective – in the form of complex everyday processes in a no less complex dynamic world – at least to the extent that an empirical something can be identified which corresponds to the letter combination intelligence and with which a comprehensible meaning could be constituted.
Of course, this term should be scalable for all biological systems, and one would have to have a comprehensible procedure that allows the various technical systems to be related to this collective intelligence term in such a way that direct performance comparisons between biological and technical systems would be possible.[17]
[17] The often quoted and popular Turing Test (see: Alan M. Turing: Computing Machinery and Intelligence. In: Mind, Volume LIX, No. 236, 1950, 433–460, [doi:10.1093/mind/LIX.236.433] (Accessed: Sept 29, 2023)) in no way meets the methodological requirements that one would have to adhere to if one actually wanted to arrive at a qualified performance comparison between humans and machines. Nevertheless, the basic idea of Turing in his meta-logical text from 1936, published in 1937 (see: A. M. Turing: On Computable Numbers, with an Application to the Entscheidungsproblem. In: Proceedings of the London Mathematical Society, s2-42, No. 1, 1937, 230–265 [doi:10.1112/plms/s2-42.1.230] (Accessed: Sept 29, 2023)), seems to be a promising starting point, since Turing, in trying to present an alternative formulation of Kurt Gödel's (1931) proof of formally undecidable propositions of arithmetic, carries out a meta-logical proof, and in this context he introduces the concept of a machine that was later called the Universal Turing Machine.
Already in this proof approach it can be seen how Turing transforms the phenomenon of a human bookkeeper, at a meta-level, into a theoretical concept, by means of which he can then meta-logically examine the behavior of this bookkeeper in a specific behavioral space. His meta-logical proof not only confirmed Gödel's meta-logical proof, but also indirectly indicates how ultimately arbitrary phenomenal complexes can be formalized at a meta-level in such a way that one can then argue with them in a formally demanding way.
CONCLUSION STRUCTURALLY
The idea of philosophical supervision of the individual sciences with the goal of a concrete integration of all disciplines into an overall conceptual structure seems to be fundamentally possible from a philosophy of science perspective based on the previous considerations. From today’s point of view, specific phenomena claimed by individual disciplines should no longer be a fundamental obstacle for a modern theory concept. This would clarify the basics of the concept of Collective Intelligence and it would surely be possible to more clearly identify interactions between human collective intelligence and interactive machines. Subsequently, the probability would increase that the supporting machines could be further optimized, so that they could also help in more demanding tasks.
CONCLUSION SUBJECTIVELY
Attempting to characterize the interactive role of text generators in a human-driven scientific discourse, assuming a certain model of science, appears to be somewhat clear from a transdisciplinary (and thus structural) perspective. However, such scientific discourse represents only a sub-space of the general human discourse space. In the latter, the reception of texts from the perspective of humans inevitably also has a subjective side [18]: people are used to suspecting a human author behind a text. With the appearance of technical aids, texts have increasingly become products which contain more and more formulations that are not written down by a human author alone but are produced by the technical aids themselves, mediated by a human author. With the appearance of text generators, the proportion of technically generated formulations increases dramatically, up to the case in which ultimately the entire text is the direct output of a technical aid. It becomes difficult or even impossible to recognize to what extent one can still speak of a controlling human share here. The human author thus disappears behind the text: what remains is a sign reality that still invites the human reader to project his inner world onto a potential human author, but this projection threatens to lose itself – or actually loses itself – in the real absence of a human author, facing a chimeric human counterpart. What happens in a world where people no longer have human counterparts?
[18] There is an excellent analysis on this topic by Hannes Bajohr titled “Artifizielle und postartifizielle Texte. Über Literatur und Künstliche Intelligenz” (Artificial and Post-Artificial Texts: On Literature and Artificial Intelligence). It was the Walter-Höllerer-Lecture 2022, delivered on December 8, 2022, at the Technical University of Berlin. The lecture can be accessed here [ https://hannesbajohr.de/wp-content/uploads/2022/12/Hoellerer-Vorlesung-2022.pdf ] (Accessed: September 29, 2023). The reference to this lecture was provided to me by Jennifer Becker.
The following text is a confluence of ideas that have been driving me for many months. Parts of it can be found as texts in all three blogs (Citizen Science 2.0 for Sustainable Development, Integrated Engineering and the Human Factor (this blog), Philosophy Now. In Search for a new Human Paradigm). The choice of the word ‘grammar’ [1] for the following text is rather unusual, but seems to me to reflect the character of the reflections well.
Sustainability for populations
The concept of sustainable development is considered here in the context of ‘biological populations’. Such populations are dynamic entities with many ‘complex properties’. For the analysis of the ‘sustainability’ of such populations, there is one aspect that seems ‘fundamental’ for a proper understanding. It is the aspect whether and how the members of a population – the actors – are interconnected or not.
An ‘unconnected’ set
If I have ‘actors’ of a ‘population’ which are in no direct ‘interaction’ with each other, then the ‘acting’ of these actors is also isolated from one another. In a wide area they probably do not ‘get in each other’s way’; in a narrow area they could easily hinder each other or even fight each other, up to mutual destruction.
It should be noted that even such disconnected actors must have minimal ‘knowledge’ about themselves and the environment, also minimal ’emotions’, in order to live at all.
Without direct interaction, an unconnected population will nevertheless die out relatively quickly as a population.
A ‘connected’ set
A ‘connected set’ exists if the actors of a population have a sufficient number of direct interactions through which they can ‘coordinate’ their knowledge about themselves and the world, as well as their emotions, to such an extent that they become capable of ‘coordinated action’. In this way the single, individual actions, with respect to their possible effects, become related to a ‘common (= social) action’ which can effect more than each of them would have been able to achieve individually.
The ’emotions’ involved must rather be such that they do not so much ‘delimit/exclude’, but rather ‘include/recognize’.
The ‘knowledge’ involved must be rather that it is not ‘static’ and not ‘unrealistic’, but rather ‘open’, ‘learning’ and ‘realistic’.
The ‘survival’ of a connected population is basically possible if the most important ‘factors’ of a survival are sufficiently fulfilled.
Transitions from – to
The ‘transition’ from an ‘unconnected’ to a ‘connected’ state of a population is not inevitable. The primary motive may simply be the ‘will to survive’ (an emotion), and the growing ‘insight’ (= knowledge) that this is only possible with ‘minimal cooperation’. An individual, however, can live as a ‘loner’ for the duration of his life, because he does not have to experience his individual death as a sufficient reason to ally with others. A population as such, however, can only survive if a sufficient number of individuals survive while interacting minimally with each other. The history of life on planet Earth suggests the working hypothesis that for 3.5 billion years there have always been sufficient members in biological populations (including the human population) to counter the ‘self-destructive tendencies’ of individuals with a ‘constructive tendency’.
The emergence and the maintenance of a ‘connected population’ needs a minimum of ‘suitable knowledge’ and ‘suitable emotions’ to succeed.
It is a permanent challenge for all biological populations to shape their own emotions in such a way that they tend not to exclude, to despise, but rather to include and to recognize. Similarly, knowledge must be suitable for acquiring a realistic picture of oneself, others, and the environment so that the behavior in question is ‘factually appropriate’ and tends to be more likely to lead to ‘success’.
As the history of the human population shows, both the ‘shaping of emotions’ and the ‘shaping of powerful knowledge’ are usually largely underestimated and poorly or not at all organized. The necessary ‘effort’ is shied away from, one underestimates the necessary ‘duration’ of such processes. Within knowledge there is additionally the general problem that the ‘short time spans’ within an individual life are an obstacle to recognize and form such processes where larger time spans require it (this concerns almost all ‘important’ processes).
We must also note that ‘connected states’ of populations can also collapse again at any time, if those behaviors that make them possible are weakened or disappear altogether. Connections in the realm of biological populations are largely ‘undetermined’! They are based on complex processes within and between the individual actors. Whole societies can ‘topple overnight’ if an event destroys ‘trust in context’. Without trust no context is possible. The emergence and the passing away of trust should be part of the basic concern of every society in a state of interconnectedness.
Political rules of the game
‘Politics’ encompasses the totality of arrangements that members of a human population agree upon in order to organize jointly binding decision-making processes.[2] On a rough scale one could place two extremes: on the one hand a population with a ‘democratic system’ [3], on the other hand a population with a maximally un-democratic system.[4]
As already noted for ‘connected systems’ in general: the success of democratic systems is in no way guaranteed. Enabling and sustaining them requires the total commitment of all participants, ‘out of their own conviction’.
Basic reality ‘corporeality’
Biological populations are fundamentally characterized by a ‘corporeality’ which is determined through and through by the ‘regularities’ of the known material structures. In their ‘complex formations’ biological systems also manifest ‘complex properties’ which cannot simply be derived from their ‘individual parts’; at the same time, the identifiable ‘material components’ of their ‘body’, together with many ‘functional connections’, are fundamentally subject to a multiplicity of ‘laws’ which are simply ‘given’. To ‘change’ these is – if at all – only possible under certain limited conditions.
All biological actors consist of ‘biological cells’ which are the same for all. In this, human actors are part of the total development of (biological) life on planet Earth. The totality of (biological) life is also called ‘biome’ and the total habitat of a biome is also called ‘biosphere’. [5] The population of homo sapiens is only a vanishingly small part of the biome, but with the homo sapiens typical way of life it claims ever larger parts of the biosphere for itself at the expense of all other life forms.
(Biological) life has been taking place on planet Earth for about 3.5 billion years.[6] Earth, as part of the solar system [7], has had a very eventful history and shows strong dynamics until today, which can and does have a direct impact on the living conditions of biological life (continental plate displacement, earthquakes, volcanic eruptions, magnetic field displacement, ocean currents, climate, …).
Biological systems generally require a continuous intake of material substances (with energy potentials) to enable their own metabolic processes. They also excrete substances. Human populations need certain amounts of ‘food’, ‘water’, ‘dwellings’, ‘storage facilities’, ‘means of transport’, ‘energy’, … ‘raw materials’, … ‘production processes’, ‘exchange processes’ … As the sheer size of a population grows, the material quantities required (and also wastes) multiply to orders of magnitude that can destroy the functioning of the biosphere.
Predictive knowledge
If a coherent population does not want to leave possible future states to pure chance, then it needs a ‘knowledge’ which is suitable to construct ‘predictions’ (‘prognoses’) for a possible future (or even many ‘variants of future’) from the knowledge about the present and about the past.
In the history of Homo sapiens so far, there is only one form of knowledge that has demonstrably proven suitable for resilient, sustainable forecasts: the knowledge form of the empirical sciences. [8] This form of knowledge is so far not perfect, but a better alternative is actually not known. At its core, ‘empirical knowledge’ comprises the following elements: (i) a description of a baseline situation that is assumed to be ‘empirically true’; (ii) a set of ‘descriptions of change processes’ that one has been able to formulate over time, and of which one knows that it is ‘highly probable’ that the described changes will occur again and again under known conditions; (iii) an ‘inference concept’ that describes how the known descriptions of change processes are applied to the description of a ‘given current situation’ in such a way that the description of the current situation can be modified into a ‘modified description’ which describes a new situation that can be considered a ‘highly probable continuation’ of the current situation in the future. [9]
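A minimal sketch in Python of these three elements, using the clock scenario from note [9] below; the representation of the description as a simple string, and the details of the little rule, are assumptions of this sketch.

# (i) Baseline description assumed to be 'empirically true'
current_description = "The clock shows 11:04h"

# (ii) A change process known from everyday experience:
#      every minute the minute hand advances by one stroke.
def advance_one_minute(description: str) -> str:
    time_part = description.split()[-1]            # e.g. "11:04h"
    hours, minutes = time_part.rstrip("h").split(":")
    minutes = (int(minutes) + 1) % 60
    hours = (int(hours) + (1 if minutes == 0 else 0)) % 24
    return f"The clock shows {hours:02d}:{minutes:02d}h"

# (iii) Inference concept: apply the change process to the current description
#       to obtain a 'highly probable continuation' (a forecast).
forecast = advance_one_minute(current_description)
print(forecast)   # "The clock shows 11:05h" – a forecast until the event occurs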
The just sketched ‘basic idea’ of an empirical theory with predictive ability can be realized concretely in many ways. To investigate and describe this is the task of ‘philosophy of science’. However, the vagueness found in dealing with the notion of an ’empirical theory’ is also found in the understanding of what is meant by ‘philosophy of science.'[9]
In the present text, the view is taken that the ‘basic concept’ of an empirical theory can be fully realized in normal everyday action using everyday language. This concept of a ‘General Empirical Theory’ can be extended by any special languages, methods and sub-theories as needed. In this way, the hitherto unsolved problem of the many different individual empirical disciplines could be solved almost by itself.[10]
Sustainable knowledge
In the normal case, an empirical theory can at best generate forecasts that have a certain empirically based probability. In ‘complex situations’ such a prognosis can comprise many ‘variants’: A, B, …, Z. Which of these variants is ‘better’ or ‘worse’ in the light of some ‘assumed criterion’ cannot be determined by the empirical theory itself. Here the ‘producers’ and the ‘users’ of the theory are called upon: do they have any ‘preferences’ why, for example, variant ‘B’ should be preferred to variant ‘C’: “Bicycle, subway, car or plane?”, “Genetic engineering or not?”, “Pesticides or not?”, “Nuclear energy or not?”, “Uncontrolled fishing or not?” …
The ‘evaluation criteria’ to be applied themselves require, on the one hand, ‘explicit knowledge’ for estimating a possible ‘benefit’; on the other hand, the concept of ‘benefit’ is anchored in the feeling and wanting of human actors: Why exactly do I want something? Why does something ‘feel good’? …
Current discussions worldwide show that the arsenal of ‘evaluation criteria’ and their implementation offer anything but a clear picture.
COMMENTS
[1] For the typical use of the term ‘grammar’ see the English Wikipedia: https://en.wikipedia.org/wiki/Grammar. In the text here in the blog I transfer this concept of ‘language’ to that ‘complex process’ in which the population of the life form ‘homo sapiens’ tries to achieve an ‘overall state’ on planet earth that allows a ‘maximally good future’ for as much ‘life’ as possible (with humans as a sub-population). A ‘grammar of sustainability’ presupposes a certain set of basic conditions, factors, which ‘interact’ with each other in a dynamic process, in order to realize as many states as possible in a ‘sequence of states’, which enable as good a life as possible for as many as possible.
[2] For the typical usage of the term politics, see the English Wikipedia: https://en.wikipedia.org/wiki/Politics . This meaning is also assumed in the present text here.
[4] Of course, one could also choose completely different basic concepts for a scale. However, the concept of a ‘democratic system’ (with all its weaknesses) seems to me to be the ‘most suitable’ system in the light of the requirements for sustainable development; at the same time, however, it makes the highest demands of all systems on all those involved. That it came to the formation of ‘democracy-like’ systems at all in the course of history, actually borders almost on a miracle. The further development of such democracy-like systems fluctuates constantly between preservation and decay. Positively, one could say that the constant struggle for preservation is a kind of ‘training’ to enable sustainable development.
[5] For typical uses of the terms ‘biome’ and ‘biosphere’, see the corresponding entries in the English Wikipedia: ‘biome’: https://en.wikipedia.org/wiki/Biome, ‘biosphere’: https://en.wikipedia.org/wiki/Biosphere
[6] Some basic data for planet Earth: https://en.wikipedia.org/wiki/Earth
[7] Some basic data for the solar system: https://en.wikipedia.org/wiki/Solar_System
[8] If you search for the term ‘Empirical Science’ you will be disappointed, because the English Wikipedia (as well as the German version) does not provide such an entry. You have to settle either for the term ‘Science’ ( https://en.wikipedia.org/wiki/Science ) or the term ‘Empiricism’ (https://en.wikipedia.org/wiki/Empiricism), but neither covers the general properties of an empirical theory.
[9] If you have a clock with hour and minute hands, which currently shows 11:04h, and you know from everyday experience that the minute hand advances by one stroke every minute, then you can conclude with a fairly high probability that the minute hand will advance by one stroke ‘very soon’. The initial description ‘The clock shows 11:04h’ would then be changed to that of the new description ‘The clock shows 11:05h’. Before the ’11:05h event’ the statement ‘The clock shows 11:05h’ would have the status of a ‘forecast’.
[10] A single discipline (physics, chemistry, biology, psychology, …) cannot conceptually grasp ‘the whole’ ‘out of itself’; it does not have to. The various attempts to ‘reduce’ any single discipline to another (physics is especially popular here) have all failed so far. Without a suitable ‘meta-theory’ no single discipline can free itself from its specialization. The concept of a ‘General Empirical Theory’ is such a meta-theory. Such a meta-theory fits into the concept of a modern philosophical thinking.
The whole text shows a dynamic, which induces many changes. Difficult to plan ‘in advance’.
Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.
INTRODUCTION
In a rather foundational paper about the idea of how one can generalize ‘systems engineering’ [*1] to the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in the light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge of seeing every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].
With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, possible ideas, possible experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta-level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied we will use the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.
The following chapters/ sections try to illustrate this common science view by visiting different special views which all are only ‘parts of a whole’, a whole which we can ‘feel’ in every moment, but which we can not yet completely grasp with our theoretical concepts.
CONTENT
Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
Concrete Abstract Statements (Main message: “… you will probably detect that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. … If you cannot provide … ‘concrete situations’ the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending on the decoding of the hearer.”)
True False Undefined (Main message: “… it reveals that ’empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences, which are ‘observation friendly’.”)
Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.“)
Playing with the Future (Main message: “In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fixed points (= words, expressions) which as such do not change, while the meaning is ‘free to change’ in any direction. And this built-in ‘dynamics’ represents an ‘internal potential’ for uncountably many possible states, which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (but see [18])”)
Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
!!! From here all the following chapters have to be re-written !!!
Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
/* Often people argue against the usage of the wikipedia encyclopedia as not ‘scientific’ because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ that scientific texts are ‘stable’, which presupposes further that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ does not correspond to real scientific texts! The reality of empirical science, even in special disciplines like ‘physics’, is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, Newton, Einstein and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus wikipedia is a real scientific encyclopedia giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour of Wikipedia */
[*1] Meaning operator ‘…’ : In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (theory of formal languages, programming languages, meta-logic) the inverted comma is used in some special way. In this text, which is primarily a philosophical text, the inverted comma sign is used as a ‘meta-language operator’ to alert the reader that the ‘meaning’ of the word enclosed in the inverted commas is ‘text specific’: in everyday language usage the speaker uses a word and tacitly assumes that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and the decision which one of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus it can be, but need not be, a meta-language strategy to comment to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which is perhaps not the commonly assumed one. Nevertheless, because the ‘common meaning’ is not a ‘clear and sharp subject’, a ‘meaning operator’ with the inverted commas also has no very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂
[*1b] That the main stream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but to solve in some sense the problem of how to ‘shape the whole biosphere’ in a way which enables a life in the universe beyond that point in time where the sun turns into a ‘red giant’, whereby life will become impossible on the planet earth (some billion years ahead) [22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harley, summarized at the end of the text. [23]
[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Look for instance at houses and buildings: to a certain degree the parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system and most parts of our ‘daily life’ follow certain ‘standards’ which make ‘planning’ possible. But there exists a certain percentage of human persons who are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical manner’ with different probabilities of occurrence. To ‘find our way’ in these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ to enable a basic structure for expectations and planning. Thus, if in this text the expression ‘normal’ is used, it refers to the ‘most common occurrences’.
[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.
[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence ” … ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ and ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain, it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ were never and are still not possible. One of the last big projects, perhaps the biggest ever, of a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer deutschen Grammatik” (“Basic features of a German grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently well describable and explainable. See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.
[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’ and can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But as one can experience, the ability of people to resolve the occurrence of different opinions is often quite weak. Culture as a whole suffers from this.
[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post which are also not finished. Knowledge is a dynamic network of interwoven views …).
[1d] ‘usual science’ is the game of science without having a sustainable format like in citizen science 2.0.
[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify”) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science
[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.
Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify”) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism
[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method
Citation =”The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3] [4c]
and
Citation = “The purpose of an experiment is to determine whether observations[A][a][b] agree with or conflict with the expectations deduced from a hypothesis.[6]: Book I, [6.54] pp.372, 408 [b] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”
[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/
[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic
[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)
[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory
[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique
[10] = [5]
[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein
[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */
[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act
[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it is the real world’. Only under certain conditions can the brain realize a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly you have to notice that it is not at that place where it ‘should be’. …
[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.
[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra
[17] Boolean (or propositional) Logic: As one can see in the mentioned articles of the English wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘boolean calculus’ [15] and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the general perspective is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is widely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
[18] Clearly, thinking alone cannot necessarily induce a possible state which along the time line will become a ‘real state’. There are numerous factors ‘outside’ the individual thinking which are ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with other individual thinking and, in some cases, get a ‘grip’ on real factors causing real changes.
[19] This kind of knowledge is not delivered by brain science alone but primarily from experimental (cognitive) psychology which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.
[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.
[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022
[22] The sun, see wkp-en: https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)
[23] By Clark, William C., and Alicia G. Harley – https://doi.org/10.1146/annurev-environ-012420-043621, Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069
[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability
[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London
[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342
[30] Döben-Henisch, G., Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263
[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video of the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, Video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.
[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386
[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club
[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)
[] UN. Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development : note / by the Secretary-General., https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (A more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )
/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */
[] Chaudhuri, S., et al., Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).
[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8
[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.
[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4
[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341
[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence
[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC General Data Protection Regulation; http://eur-lex.europa.eu/eli/reg/2016/679/oj (effective from 25 May 2018) [26.2.2022]
[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973
[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2
[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings
[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.
[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
[] Lenat, D. What AI can learn from Romeo & Juliet.Forbes (2019)
[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)
[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)
[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.
[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017
[] Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */
[] Olazaran, M., A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).
[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530
[] Karl Popper, “A World of Propensities”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1988, slightly expanded and reprinted 1990, repr. 1995)
[] Karl Popper, “Towards an Evolutionary Theory of Knowledge”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)
[] Karl Popper, “All Life is Problem Solving”, article, originally a lecture given in German in 1991, first published in the (German) book “Alles Leben ist Problemlösen” (1994), then in the (English) book “All Life is Problem Solving”, 1999, Routledge, Taylor & Francis Group, London – New York
[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.
[] Skaburskis, Andrejs (19 December 2008). “The origin of “wicked problems””. Planning Theory & Practice. 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘
[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).
[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021) : Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828
[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681
[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543
[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422
[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.
[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a
[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007
[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp.1179-1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006
[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011
[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105
[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0
[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691
[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094
[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence
[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285
[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147
[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004
[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663
[] Miguel de Castro Neto and Ana Espírtio Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14
[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568
[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273
[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3
[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5
[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80
[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15 DOI 10.1007/s40309-013-0015-4
[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30
[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032
[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427
[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331
[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x
[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77
[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53
[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569
[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology
[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313
[] Aya H. Kimura and Abby Kinchy (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099
[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages
[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, 19 December 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#
Comment by Gerd Doeben-Henisch:
[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.
[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821
[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, pp. 663–670
[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., San Francisco, CA, United States, conference 29 June – 2 July 2000
[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum, Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. You can read in the abstract: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”
In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress
In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”
[] Stuart Russell (2019), Human Compatible: AI and the Problem of Control, Penguin Books, Allen Lane; 1st edition (8 October 2019)
In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”
[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600
In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”
[] John H. Miller, (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages Paperback ISBN: 978-1947864429 , DOI: 10.37911/9781947864429
In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”
In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”
This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.
POPPERs POSITION IN THE CHAPTERS 1-17
In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.
Scientific Theory
A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one adds some additional concrete assumptions H.
Example: Theory T1 = <AX1,⊢>
AX1= {Birds can fly}
H1= {Peter is a bird}
⊢: Peter can fly
Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.
One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)
The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:
AX1= {Birds can fly}
H2= {Doxy is a bird, Doxy cannot fly}
⊢: ‘Doxy can fly’ & ~’Doxy can fly’
If a statement A can be inferred and simultaneously the negation ~A then this is called a logical contradiction:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
In this case the set {AX1, H2} is called inconsistent.
If a set of statements is classified as inconsistent then you can derive everything from this set. In this case you can no longer distinguish between true and false statements.
Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.
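The asymmetry between accumulating confirmations and a single falsification can be sketched in a few lines of Python (the observations are invented for illustration; only the asymmetry matters):

```python
# Sketch of Popper's asymmetry: confirmations support the axiom
# 'Birds can fly' but never prove it; one counterexample falsifies it.
# The observation data are invented for illustration.

observations = [("Peter", True), ("Mary", True), ("Doxy", False)]  # (bird, can_fly)

confirmations = [name for name, can_fly in observations if can_fly]
counterexamples = [name for name, can_fly in observations if not can_fly]

print(f"confirmations: {len(confirmations)}")   # can only increase trust, never prove
if counterexamples:
    # With AX1 = 'Birds can fly' and H2 = 'Doxy is a bird, Doxy cannot fly'
    # one can derive 'Doxy can fly' and its negation: a contradiction.
    print(f"falsified by: {counterexamples[0]}")
    print("{AX1, H2} is inconsistent")
```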
Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which deserve some faith that these concepts will prove themselves every time one tries. (cf. p.33, 63)
Meta Theory, Logic of Scientific Discovery, Philosophy of Science
Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.
Those who talk about them are usually philosophers of science, who are only a special kind of philosopher, e.g. a person like Popper.
Reading the text of Popper one can identify the following elements which seem to be important for describing scientific theories in a broader framework:
A scientific theory from a point of view of Philosophy of Science represents a structure like the following one (minimal version):
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S with expressions E of L. Those expressions E which are considered to be observable and classified to be true are called true expressions E+, others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, which are universal statements allowing the logical derivation of expressions called theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true and also empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time, see the example from the paragraph before:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the known true empirical expressions, has become inconsistent: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.
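As a reading aid, the minimal structure MT can be sketched as a small data structure (a Python sketch, not Popper’s own formalism; the slot names and example values are chosen freely for illustration):

```python
# Minimal sketch of a part of MT = <S, A[mu], E, L, AX, |-, ET, E+, E-, ...>
# as Python data; only the sets of expressions and the inconsistency test
# are modeled, everything else is left out for brevity.
from dataclasses import dataclass, field

@dataclass
class MetaTheory:
    expressions: set                 # E  : all expressions of the language L
    axioms: set                      # AX : universal statements
    empirically_true: set            # E+ : expressions classified as true
    empirically_false: set           # E- : expressions classified as false
    theorems: set = field(default_factory=set)  # ET : logically derived

    def is_inconsistent(self) -> bool:
        """{AX, E+} is inconsistent if some derived theorem is at the same
        time classified as empirically false (A and ~A)."""
        return bool(self.theorems & self.empirically_false)

mt = MetaTheory(
    expressions={"Doxy can fly", "Doxy cannot fly", "Doxy is a bird"},
    axioms={"Birds can fly"},
    empirically_true={"Doxy is a bird", "Doxy cannot fly"},
    empirically_false={"Doxy can fly"},
    theorems={"Doxy can fly"},       # derived from AX together with E+
)
print(mt.is_inconsistent())          # True
```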
Popper gives some general requirements for the axioms of a theory (cf. p.71):
(1) Axioms must be free from contradiction.
(2) The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
(3) The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.
While the requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this does not seem plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified, but if they are, then the theory has to be adapted to the new insights.
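For purely propositional axioms, requirements (1) and (2) can indeed be checked mechanically, as the following sketch indicates (the two axioms are invented for illustration; requirement (3) has no such mechanical test, which is exactly the point made above):

```python
# Sketch: brute-force check of requirement (1) consistency and
# (2) independence for propositional axioms over a few atoms.
# The concrete axioms are invented for illustration.
from itertools import product

atoms = ["p", "q"]
axioms = {
    "A1": lambda v: (not v["p"]) or v["q"],   # p -> q
    "A2": lambda v: v["p"],                   # p
}

def assignments():
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# (1) free from contradiction: some assignment satisfies all axioms
consistent = any(all(ax(v) for ax in axioms.values()) for v in assignments())

# (2) independence: for each axiom there is an assignment which satisfies
# the remaining axioms but not this one, so it is not deducible from them
independent = all(
    any(all(other(v) for k, other in axioms.items() if k != name) and not ax(v)
        for v in assignments())
    for name, ax in axioms.items()
)

print(consistent, independent)  # True True
# Requirement (3), sufficiency for the intended subject area, cannot be
# decided this way: it depends on the open empirical environment.
```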
Empirical Interpretation(s)
Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), E- (expressions classified as being actually false) or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).
Examples:
(1) Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being a bird then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds, the one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ with more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or to say there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
(2) These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
(3) The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
(4) Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ have the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for the predicate could be ‘Can Fly’ and for the object ‘a Bird’, then we would get ‘Can Fly(a Bird)’, translated as ‘The object ‘a Bird’ has the property ‘can fly’’, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.
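The role of such a meaning function can be sketched in a few lines (a Python sketch; the expressions, the observation data and the mapping are invented for illustration):

```python
# Sketch of a 'meaning function': an abstract expression such as 'Y(X)'
# gets an empirical interpretation only via an agreed mapping to observations.
# Observation data and mapping are invented for illustration.

observations = {("can_fly", "bird"): True,
                ("can_fly", "stone"): False}

meaning_function = {
    "A bird can fly":  ("can_fly", "bird"),
    "A stone can fly": ("can_fly", "stone"),
}

def interpret(expression: str):
    """Return True/False if the expression has an agreed empirical
    interpretation, otherwise None ('no meaning as such')."""
    key = meaning_function.get(expression)
    return observations.get(key) if key else None

print(interpret("A bird can fly"))   # True
print(interpret("A stone can fly"))  # False
print(interpret("Y(X)"))             # None: no meaning without a mapping
```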
Axioms and Empirical Interpretations
The basic idea with a system of axioms AX is — according to Popper — that the axioms as universal expressions represent a system of equations where the general terms should be able to be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model‘ for that set of admissible terms which can satisfy the equations.(cf. p.72f)
But Popper has difficulties with an axiomatic system interpreted as a system of equations since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)
On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics …. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)
But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Especially physics also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.
Thus the argument goes back to Popper's primary problem: that universal names cannot be directly interpreted in an empirically decidable way.
As the preceding examples (1) – (4) show, for human actors it is no problem in principle to relate any kind of abstract expressions to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L will never be used in isolation! The usage of expressions is always bound to human actors using expressions as part of a language L, which consists not only of the set of possible expressions E but also of the built-in meaning function μ which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are, as we know today, no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression, even those with so-called abstract or universal concepts, can be mapped into something real if the human actors agree about such mappings!
Example:
Let us have a look at another example.
Let us take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a math theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as sets of numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom AX, whose empirical meaning is still unclear.
It is conceivable that one uses this formalism to describe empirical facts, like the description of a group of humans collecting some objects. Different people are bringing objects; the individual contributions will be reported on a sheet of paper, and at the same time they put their objects into some box. Sometimes someone looks into the box and counts the objects in the box. If it has been noted that A brought 1 egg and B brought 2 eggs, then there should, according to the theory, be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one would define every measurement a+b=c’ with c’ ≠ c as a contradiction to the theoretically given a+b=c, then we would have with ‘1+2 = 3′ & ~’1+2 = 3’ a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … But what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter that eventually he himself has taken the missing egg? But would this make sense? Why should the counter write the notes how many eggs have been delivered, to make a difference visible? …
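The egg example can be written down as a small sketch (Python, purely illustrative; the numbers are those from the text, the handling of the deviation is a free assumption):

```python
# Sketch of the egg-counting example: the axiom schema a+b=c yields a
# logically derived forecast; the measured value may deviate without
# forcing us to declare the whole system inconsistent.

def forecast(a: int, b: int) -> int:
    return a + b                      # the 'theory': a + b = c

delivered = [("A", 1), ("B", 2)]      # noted on the sheet of paper
expected = forecast(*[n for _, n in delivered])   # 1 + 2 = 3
counted = 2                           # eggs actually found in the box

if counted != expected:
    # Instead of 'the theory is inconsistent' the everyday reaction is to
    # look for a disturbing factor (someone has taken an egg).
    print(f"forecast {expected}, measured {counted}: look for a missing egg")
```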
Thus to interpret some abstract expression with regard to some observable reality is not a problem in principle, but it can turn out to be unsolvable for purely practical reasons, leaving questions of empirical soundness open.
SOURCES
[1] Karl Popper, The Logic of Scientific Discovery, First published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York (more editions have been published later; I am using the eBook version of Routledge (2002))
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458, February 25, 2021
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Last change: March 16, 2021 (some minor corrections)
Since January 2021 an intense series of posts has been published on how the new ideas manifested in the new software published in this journal can adequately be reflected in the DAAI theoretical framework. Because these ideas touched in the beginning on parts of philosophy, philosophy of science, and philosophy of engineering, these posts were first published in the German blog of the author (cognitiveagent.org). This series of posts started with an online lecture for students of the University of Leipzig together with students of the ‘Hochschule für Technik, Wirtschaft und Kultur (HTWK)’ on January 12, 2021. Here is the complete list of posts:
As described in the original DAAI theory paper the whole topic of HMI is here understood as a job within the systems engineering paradigm.
The specification process is a kind of a ‘test’ whether the DAAI format of the HMI analysis works with this new application too.
To remember, the main points of the integrated engineering concept are the following ones:
A philosophical framework (Philosophy of Science, Philosophy of Engineering, …), which gives the fundamentals for such a process.
The engineering process as such where managers and engineers start the whole process and do it.
After the clarification of the problem to be solved and a minimal vision of where to go, it is the job of the HMI analysis to clarify which requirements have to be fulfilled in order to find an optimal solution for the intended product/service. In modern versions of the HMI analysis substantial parts of the context, i.e. substantial parts of the surrounding society, have to be included in the analysis.
Based on the HMI analysis in the logical design phase a mathematical structure has to be identified, which integrates all requirements sufficiently well. This mathematical structure has to be ‘map-able’ into a set of algorithms written in appropriate programming languages running on an appropriate platform (the mentioned phases Problem, Vision, HMI analysis, Logical Design are in reality highly iterative).
During the implementation phase the algorithms will be translated into a real working system.
Which Kinds of Experts?
While the original version of the DAAI paper assumes as ‘experts’ only the typical managers and engineers of an engineering process, including all the typical settings, the new extended version under the label CM:MI (Collective Man-Machine Intelligence) has been generalized to any kind of human person as an expert, which allows a maximum of diversity. No one is the ‘absolute expert’.
Collective Intelligence
By ‘intelligence’ is understood here the whole of knowledge, experience, and motivations which can be the moving momentum inside a human person. By ‘collective’ is meant the situation where more than one person communicates with other persons to share their intelligence.
Man-Machine Symbiosis
Today there are discussions going around about the future of man and (intelligent) machines. Most of these discussions are very weak because they lack clear concepts of intelligent machines as well as of what a human person is. In the CM:MI paradigm the human person (together with all other biological systems) is seen at the center of the future (for reasons based on modern theories of biological evolution), and the intelligent machines are seen as supporting devices (although it is assumed here to use ‘strong’ machine intelligence compared to the actually ‘weak’ machine intelligence of today).
CM:MI by Design
Although we know that groups of many people are ‘in principle’ capable of sharing intelligence to define problems and visions, to construct solutions, to test the solutions etc., we know too that the practical limits of brains and communication are quite narrow. For special tasks a computer can be much, much better. Thus the CM:MI paradigm provides an environment for groups of people to do the shared planning and testing in a new way, using only normal language. The software is designed to enable new kinds of shared knowledge about shared common modes of future worlds. Only with such a truly general framework can the vision of a sustainable society, as pointed out by the United Nations since 1992, become real.
The starting point of view in this blog has been and still is the point of view of engineering, especially the perspective of the man-machine interface [MMI], later man-machine interaction, then accompanied by human-computer interaction [HCI] or human-machine interaction [HMI]. While MMI is often discussed in isolation, not as part of engineering, this blog emphasizes a point of view where MMI is understood as an integrated part of systems engineering. The past years have shown that this integration makes a great difference in the overall layout as well as in the details of the methods used. This integration widened the scope of MMI to the context of engineering in a way which tore down many artificial boundaries in dealing with the subject of MMI. The analysis part of MMI can take into account not only the intended users and a limited set of tasks required for the usage of a system; it can extend the scope to the different kinds of contexts of the intended users as well as of the intended service/product as such: cultural patterns, sustainability perspectives, climate relevance, political implications, and more. This raises the question whether there are other established scientific disciplines which share this scope with MMI. Traditionally, experimental and cognitive psychology has always played an important role as part of the MMI analysis. Different special disciplines like physiology or neuro-psychology, linguistics, phonetics etc. have played some role too. More recently culture and society have been brought more into the focus of MMI. What about sociology? What about anthropology? The following text discusses a possible role of anthropology in the light of the recent book Why The World Needs Anthropologists?
INTRODUCTION AND CONCLUSION
This review has the addendum ‘Part 1’, pointing to the fact that this text does not yet deal with the whole book but only with some parts, the introduction and the conclusion.
An Introduction
The introduction of the book asks why the world needs anthropologists, and the main pattern of the introduction is to look back at the old picture of anthropology and then to identify what the new paradigm could or should be.
The roots of anthropology are located in the colonial activities of the British Empire as well as in the federal activities of the USA, both of which had a strong bias towards serving political power rather than evolving a really free science. An enduring gap between the more theoretical anthropology and an applied one is also thematized, although there has always existed a strong inter-dependency between both.
To leave behind the close connection with primarily governmental interests, and to see the relation between the theory and the different applications more positively than negatively, anthropology is understood as being challenged to rebrand its appearance in the public and in its own practice.
The most vital forces for such a rebranding seem to be rooted in stronger engagement with societal problems of public interest, thereby challenging the theory to widen its concepts and methods.
Besides the classical methods of anthropology (cultural relativism, ethnography, comparison, and contextual understanding), anthropology has to show that it can make sense beyond pure data: deciphering ambiguity, complexity, and ambivalence, helping with diversity, and investigating the interface between culture, technology, and environment.
What Is Left Out
After the introduction the main chapters of the book are left out in this text until later. The chapters of the book give examples answering the questions why the world needs anthropology, what the motivations of active anthropologists were to become one, how they have applied anthropology, and which five tips they would give for practicing and theorizing.
Conclusion
In the conclusion of the book it is not the five questions that serve as the guiding principle but ‘five axes that matter greatly’. These five axes are circumscribed as (i) navigate the ethics of change; (ii) own it, in the sense that an anthropologist should have self-esteem for his/her/x profession and can co-create it with others; (iii) expand the skill-set; (iv) collaborate, co-create and study up; (v) recommend, by acting as advisors and consultants.
The stronger commitment to actual societal problems leads anthropology to the crossroads of many processes which require new views and new methods. Gaining new knowledge and doing a new practice is not always accompanied by known ethical schemata. Doing this induces ethical questions which have not been known before in this way. While a new practice challenges the old knowledge and induces pressure for change, new versions of knowing can trigger new forms of practice as well. Theory and application are a dynamic pair where each part learns from the other.
The long-lasting preference of academic anthropology for thinking predominantly in the mind-set of the white western man is more and more dissolved by extending anthropology from academia to application, from men to the diversity of genders, from western culture to all the other cultures, from single persons to assemblies of diverse gatherings living an ongoing discourse with a growing public.
This widening of anthropological subjects and methods naturally calls for more interdisciplinarity, transdisciplinarity, and a constructive attitude which looks ahead to possible futures of processes.
Close to this are expressions like collaboration and co-creation with others. In the theory dimension this is reflected by multiperspectivity and a holistic view. In societal development processes — like urban planning — there are different driving forces acting top-down or bottom-up.
Recommending solutions based on anthropological thinking, ending in a yes or no, can be of help and can be necessary, because real-world processes cannot simply wait for final answers (which are often not realistic); they need decisions, again and again, to proceed now.
REFLECTIONS FOLLOWING THE INTRODUCTION AND THE CONCLUSION
The texts just referred to give a fresh impression of a discipline in dynamic movement.
General Knowledge Architecture
From the point of view of MMI (Man-Machine Interface, later HMI Human-Machine Interaction, in my theory extended to DAAI Distributed Actor-Actor Interaction), embedded in systems engineering with an openness for the whole context of society and culture, the question arises whether such a dynamic anthropology can be of help.
To clarify this question let us have a short look at the general architecture of knowledge.
Within the everyday world philosophy can be understood as the most general point of view of knowing and thinking. Traditionally logic and mathematics were understood as part of philosophy, although today this has changed. But there are no real reasons for this departure: logic and mathematics are not empirical sciences and they are not engineering.
Empirical science can be understood as a specialized extension of philosophical thinking with identifiable characteristics which allow one to differentiate, to some extent, the different disciplines. Traditionally all the different disciplines of empirical science have a more theoretical part and a more applied part. But systematically they depend on each other. A theory is only an empirical one if there exists a clear relationship to the everyday world, and certain aspects of the everyday world are only theoretical entities (data) if there exists a relationship to an explicit theory which gives a formal explanation.
Asking for a systematic place for engineering, it is often said that it belongs to the applied dimension of empirical science. But engineering had realized processes, buildings and machines long before there was a scientific framework for doing this, and engineering uses in its processes lots of knowledge which is not part of science. On the other side, yes, engineering uses scientific knowledge as far as it is usable, and it also gives back to science many questions which are not yet solved sufficiently. Therefore it is sound to locate engineering beside science, both being part of philosophy dealing with the practical dimensions of life.
What About Anthropology?
While philosophy (with logic and mathematics) is ‘on top’ of empirical science and engineering, it is an interesting question where to place anthropology.
While empirical science as well as engineering inherit all that philosophy provides, the question remains whether anthropology is more an empirical science, more engineering, or some kind of hybrid system with roots in empirical science as well as in engineering.
Looking back into history one could get the impression that anthropology is more a kind of empirical science with strong roots in academia, doing fieldwork to feed the theories.
Looking at the new book one could get the impression that anthropology should be more like engineering: identifying open problems in society and trying to transform these problems — like engineers — into satisfying solutions, at least on the level of counseling.
Because in our societies the universities traditionally enjoy a higher esteem than the engineers — although engineers are all trained in highly demanding university courses — there could be a bias in the thinking of anthropologists not to think of their discipline as engineering.
If one looks at the real world, then nearly everything which makes human societies livable is realized by engineers. Yes, without science many of today’s solutions wouldn’t be possible, but no single scientific theory has ever directly enabled some practical thing. And without the engineers there would not exist any of the modern machines used for measurements and experiments in science. Thus both are intimately interrelated: science inspires engineering and engineering inspires and enables science, but both are genuinely different, and each plays its own fundamental role.
Thus if I read the new book as an engineer (attention: I am also a philosopher and trained in the Humanities too!), then I think there are more arguments for understanding anthropology as engineering than as a pure empirical science. In the light of my distributed actor-actor interaction paradigm, which is a ‘spin-off’ of engineering and societal thinking, it seems very ‘natural’ to think of anthropology as a kind of social engineering.
Let us discuss both perspectives a bit more, thereby not excluding the hybrid version.
1) Anthropology as Engineering
The basic idea of engineering is to enable a change process which is completely transparent in all respects: Why, Who, Where, When, How etc. The process starts with explicit preferences turning some known reality into a problem on account of some visions which have been imagined and which have come to be ranked higher than the given known reality. Then the engineers try to organize an appropriate change process which leads from the given situation to a new situation at some date in the future, where the envisioned goal state has become real and the situation from the beginning, which has been ranked down, has disappeared or is at least weakened in a way that one can say: yes, it has changed.
Usually engineers are known to enable change processes which enable the production of everyday things (tools, products, machines, houses, plants, ships, airplanes, …), but to the extent that engineering touches everyday life deeper and deeper (e.g. the global digital revolution absorbing more and more of the real-life processes by transforming them into digital realities, forcing human persons to act digitally and no longer with their bodies in the everyday world), the sharp boundary between engineering products and the societal life of human persons is vanishing. In such a context engineering is becoming social engineering, even if the majority of traditional engineers do not yet see it this way. The way the traditional discipline MMI Man-Machine Interface expanded to HMI Human-Machine Interaction and further morphed into DAAI Distributed Actor-Actor Interaction already manifests that the realm of human persons, indeed the whole of society, is already included in engineering. The border between machines and human actors is already at least fuzzy, and the mixing of technical devices and human actors (as well as all other biological actors) has already reached a degree which no longer allows a separation.
These ideas argue for the option of seeing anthropology as social engineering: thematizing all the important visions which seem to be helpful or important for a good future of modern mankind, and helping to organize change processes which support approaching this better future. That these visions can fail, can be wrong, is part of the everlasting battle of homo sapiens to gain the right knowledge.
2) Anthropology as an Empirical Science
… to be continued …
3) Anthropology as a Hybrid Couple of Science and Engineering
Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, Nov 8, 2020
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
In daily life we experience today a multitude of perspectives in all areas. While our bodies are embedded in real-world scenarios, our minds are filled with perceptions, emotions, ideas, and memories of all kinds. What links us to each other is language. Language gives us the power to overcome the isolation of our individual brains located in individual bodies. And by this, our language, we can distribute and share the inner states of our brains, pictures of life as we see it. And it is this open web of expressions which spreads to the air, to the newspapers and books, to the databases in which the different views of the world are manifested.
SORTING IDEAS SCIENTIFICALLY
While our bodies touch the reality outside the bodies, our brains organize different kinds of order, finally expressed — only some part of it — in expressions of some language. While our daily talk mostly follows automatically some naive patterns of ordering, empirical science tries to order the expressions more consciously, following some self-defined rules called methods, called scientific procedures, to enable transparency, repeatability, and decidability of the hypothesized truth of its symbolic structures.
But because empirical science wants to be rational by being transparent, repeatable, measurable, there must exist an open discourse which deals with science as an object: What are the ingredients of science? Under which conditions can science work? What does it mean to ‘measure’ something? And other questions like these.
PHILOSOPHY OF SCIENCE
The discipline which is responsible for such a discourse about science is not science itself but another instance of thinking and speaking, which is called Philosophy of Science. Philosophy of science deals with all aspects of science from the outside of science.
PHILOSOPHY
Philosophy of Science, dealing with the empirical sciences as an object, has a special focus, and it can itself be reflected from another point of view which deals with Philosophy of Science as an object. This relationship reflects a general structure of human thinking: every time we have some object of our thinking, we practice a different point of view talking about the actual object. While everyday thinking leads us directly to Philosophy as our active point of view, an object like empirical science allows an intermediate point of view called Philosophy of Science, leading then to Philosophy again.
Philosophy is our last point of reflection. If we want to reflect on the conditions of our philosophical thinking, then our thinking, along with the language it uses, tries to turn back on itself, but this is difficult. The whole history of Philosophy shows this unending endeavor of a consciousness trying to explain itself from inside itself. Famous examples of this kind of thinking are e.g. Descartes, Kant, Fichte, Schelling, Hegel, and Husserl.
These examples show that there exists no real way out.
PHILOSOPHY ENHANCED BY EMPIRICAL SCIENCES ? !
At first glance it seems contradictory that Philosophy and the Empirical Sciences could work ‘hand in hand’. But history has shown us that this is, to a certain extent, possible; perhaps it is even a major breakthrough for the philosophical understanding of the world, especially also of man himself.
Modern empirical sciences like Biology and Evolutionary Biology, in cooperation with many other empirical disciplines, have shown us that the actual biological systems — including homo sapiens — are products of a so-called evolutionary process. And supported by modern empirical disciplines like Ethology, Psychology, Physiology, and the Brain Sciences we could gain some first knowledge of how our body works, how our brain works, and how our observable behavior is connected to this body and its brain.
While philosophers like Kant or Hegel could investigate their own thinking only from the inside of their consciousness, the modern empirical sciences can investigate human thinking from the outside. But until now there is a gap: we have no elaborated theory about the relationship between the inside of consciousness and the outside knowledge about body and brain.
Thus what we need is a hybrid theory mapping the inside to the outside and in reverse. There are some first approaches under labels like Neuro-Psychology or Neuro-Phenomenology, but these are not yet completely clarified in their methodology and in their relationship to Philosophy.
If one can describe to some extent the phenomena of consciousness from the inside as well as the workings of the brain translated into its behavioral properties, then one can start first mappings like those which have been used in this blog to establish the theory for the komega software.
SOCIOLOGY
Sociology is only one empirical discipline among many others. Although the theory of this blog uses many disciplines simultaneously, Sociology is of special interest because it is the kind of empirical discipline which explicitly deals with human societies, with subsystems called cities.
The komega software which we are developing is understood here as enabling a system of interactions as part of a city understood as a system. If we understand Sociology as an empirical science according to some standard view of empirical science, then it is possible to describe a city as an input-output system whose dynamics can be influenced by this komega software if citizens use this software as part of their behavior.
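A minimal sketch of this input-output view (the names and update rules are illustrative assumptions, not the actual komega code): the city is a state, the inputs include proposals made by citizens through the planning software, and one step of the dynamics maps state and inputs to a new state.

```python
# A minimal sketch (assumed names and rules, not the actual komega code) of a
# city modeled as an input-output system: one step of the dynamics maps the
# current state and the inputs to a new state; proposals made by citizens
# through the planning software enter the system as one kind of input.

def city_step(state, inputs):
    """One discrete step of the city dynamics: (state, inputs) -> new state."""
    new_state = dict(state)
    new_state["population"] += inputs.get("migration", 0)
    # inputs coming from citizens via the planning software
    for proposal in inputs.get("software_proposals", []):
        if proposal == "build_park":
            new_state["green_areas"] += 1
    return new_state

state = {"population": 10000, "green_areas": 5}
inputs = {"migration": 120, "software_proposals": ["build_park"]}
print(city_step(state, inputs))   # {'population': 10120, 'green_areas': 6}
```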
STANDARD VIEW OF EMPIRICAL SCIENCE
Without some kind of a Standard View of Empirical Science it is not possible to design a discipline — e.g. Sociology — as an empirical discipline. Although it seems that everybody thinks we have such a ‘Standard View of Empirical Science’, in the real world of today one must state that we do not have such a view. In the 1980s it looked for some time as if we had it, but if you start searching the papers, books and schools today, you will perceive a very fuzzy field called Philosophy of Science, and within the so-called empirical sciences you will not find any coherent documented version of a ‘Standard View of Empirical Science’.
Because it is difficult to see what a process could look like which enables such a ‘Standard View of Empirical Science’ again, we will try to document our own assumptions for our theory as well as possible. Inevitably this will mostly have the character of only a ‘fragment’, an ‘incomplete outline’. Perhaps there will again be a time when the sciences have returned to a commonly accepted view of how science should look in order to be called empirical science.
The context for this text is the whole block dedicated to the AAI (Actor-Actor Interaction) paradigm. The aim of this text is to give the big picture of all dimensions and components of this subject as it shows up during April 2019.
The first dimension introduced is the historical dimension, because this allows a first orientation in the course of events which led to the actual situation. It starts with the early days of real computers in the thirties and forties of the 20th century.
The second dimension is the engineering dimension, which describes the special view within which we are looking at the overall topic of interactions between human persons and computers (or machines or technology or society). We are interested in how to transform a given problem into a valuable solution in a methodologically sound way called engineering.
The third dimension is the whole of society, because engineering always happens as some process within a society. Society provides the resources which can be used and supplies the preferences (values) determining what is understood as ‘valuable’, as ‘good’.
The fourth dimension is Philosophy, as that kind of thinking which takes into account everything which can be thought; within thinking, Philosophy clarifies the conditions of thinking and the possible tools of thinking, and it has to clarify when some symbolic expression becomes true.
HISTORY
In history we are looking back in the course of events. And this looking back is in a first step guided by the concepts of HCI (Human-Computer Interface) and HMI (Human-Machine Interaction).
It is an interesting phenomenon how the original focus on the interface between human persons and the early computers shifted to the more general picture of interaction, because the computer as a machine developed rapidly on account of the rapid development of the enabling hardware (HW) and the enabling software (SW).
Within the general framework of hardware and software the so-called artificial intelligence (AI) first developed as a sub-topic of its own. In the last 10 – 20 years it has become productive in a way that it now seems to become a normal part of every kind of software. Software and smart software seem to be interchangeable. Thus the new wording of augmented or collective intelligence is emerging, intending to bridge the possible gap between humans with their human intelligence and machine intelligence. There is some motivation from the side of society not to allow the impression that the smart (intelligent) machines will some day replace the humans. Instead one propagates the vision of a new collective shape of intelligence where human and machine intelligence allow a symbiosis in which each side gives its best and receives a maximum in a win-win situation.
What is revealing about the actual situation is the fact that the mainstream is always talking about intelligence but not seriously about learning! Intelligence is by its roots a static concept representing some capabilities at a certain point of time, while learning is the more general dynamic concept that a system can change its behavior depending on actual external stimuli as well as internal states. And such a change includes real changes of some of its internal states. Intelligence does not communicate this dynamics! The most demanding aspect of learning is the need for preferences. Without preferences learning is impossible. Today machine learning is a very weak example of learning because the question of preferences is not a real topic there. One assumes that some reward is available, but one does not really investigate this topic. The rare research trying to do this job states that there is not the faintest idea of how a general continuous learning could happen. Human society is of no help for this problem as long as human societies have a clash of many, often opposite, values and no commonly accepted view of how to improve this situation.
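The following small sketch (illustrative names, a deliberately trivial learning rule) illustrates the point about preferences: the agent below updates its action values only if a preference (reward) function is supplied; without preferences there is nothing to rank and therefore nothing to learn.

```python
# A small sketch of the claim that learning presupposes preferences: the agent
# below updates its action values only if a preference (reward) function is
# supplied; without preferences there is nothing to rank and nothing to learn.
# All names and the trivial learning rule are illustrative assumptions.

import random

def learn(episodes, preference=None):
    values = {"a1": 0.0, "a2": 0.0}          # value estimates for two actions
    for _ in range(episodes):
        # mostly pick the currently best action, sometimes explore
        action = max(values, key=values.get) if random.random() > 0.2 \
                 else random.choice(list(values))
        outcome = {"a1": 0.3, "a2": 0.7}[action]   # what the action produces
        if preference is None:
            continue       # no preferences: the internal state never changes
        reward = preference(outcome)               # preferences rank outcomes
        values[action] += 0.1 * (reward - values[action])
    return values

print("without preferences:", learn(200))
print("with preferences:   ", learn(200, preference=lambda outcome: outcome))
```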
ENGINEERING
Engineering is the art and the science to transform a given problem into a valuable and working solution. What is valuable decides the surrounding enabling society and this judgment can change during the course of time. Whether some solution is judged to be working can change during the course of time too but the criteria used for this judgment are more stable because of their adherence to concrete capabilities of technical solutions.
While engineering was and is always a kind of an art and needs such aspects like creativity, innovation, intuition etc. it is also and as far as possible a procedure driven by defined methods how to do things, and these methods are as far as possible backed up by scientific theories. The real engineer therefore synthesizes art, technology and science in a unique way which can not completely be learned in the schools.
In the past as well as in the present, engineering has to happen in teams of many — often many thousands or even more — people who coordinate their brains by communication, which enables in the individual brains some kind of understanding, of emerging world pictures, which in turn guide the perception, the decisions, and the concrete behavior of everybody. And these cognitive processes are embedded — in every individual team member — in mixtures of desires, emotions, and motivations, which can support the cognitive processes or obstruct them. Therefore an optimal result can only be reached if the communication serves all necessary cognitive processes and the interactions between the team members enable the necessary constructive desires, emotions, and motivations.
If an engineering process is done by a small group of dedicated experts — usually triggered by the given problem of an individual stakeholder — this can work well for many situations. It has the flavor of a so-called top-down approach. If the engineering deals with states of affairs where different kinds of people, citizens of some town etc., are affected by the results of such a process, the restriction to a small group of experts can become highly counterproductive. In those cases of a widespread interest it seems promising to include representatives of all the involved persons in the executing team, in order to recognize their experiences and their kinds of preferences. This has to be done in a way which is understandable and appreciative, showing esteem for the others. This manner of extending the team of usual experts by situative experts can be termed a bottom-up approach. In this usage of the term, bottom-up is not the opposite of top-down but reflects the extent to which members of a society are included insofar as they are affected by the results of a process.
SOCIETY
Societies in the past and the present occur in a great variety of value systems, organizational structures, systems of power etc. Engineering processes within a society depend completely on the available resources of that society and on its value systems.
The population dynamics, the needs and wishes of the people, the real territories, the climate, housing, traffic, and many different things constantly produce demands to be solved if life shall be possible and continue during the course of time.
The self-understanding and the self-management of societies is crucial for their ability to use engineering to improve life. This requires communication and education to a sufficient extent and appropriate public rules of management; otherwise the necessary understanding and the freedom to act are lacking to use engineering in the right way.
PHILOSOPHY
Without communication no common constructive process can happen. Communication happens according to many implicit rules compressed in the formula who when can speak how about what with whom etc. Communication enables cognitive processes of for instance understanding, explanations, lines of arguments. Especially important for survival is the ability to make true descriptions and the ability to decide whether a statement is true or not. Without this basic ability communication will break down, coordination will break down, life will break down.
The basic discipline to clarify the rules and conditions of true communication, of cognition in general, is called Philosophy. All the more modern empirical disciplines are specializations of the general scope of Philosophy and it is Philosophy which integrates all the special disciplines in one, coherent framework (this is the ideal; actually we are far from this ideal).
Thus to describe the process of engineering, driven by different kinds of actors which coordinate themselves by communication, is primarily the task of philosophy with all its sub-disciplines.
Thus some of the topics of Philosophy are language, text, theory, verification of a theory, functions within theories as algorithms, computation in general, inferences of true statements from given theories, and the like.
In this text I apply Philosophy as far as necessary. Especially, I am introducing a new process model extending the classical systems engineering approach by including the driving actors explicitly in the formal representation of the process. Learning machines are included as standard tools to improve human thinking and communication. One can name this Augmented Social Learning Systems (ASLS). Compared to the wording Augmented Intelligence (AI) (as used for instance by the IBM marketing), the ASLS concept stresses that the primary point of reference is the biological systems which created and create machine intelligence as a new tool to enhance biological intelligence as part of biological learning systems. Compared to the wording Collective Intelligence (CI) (as propagated by the MIT, especially by Thomas W. Malone and colleagues), the spirit of the CI concept seems to be similar, but perhaps with only a weak similarity.
Last change: 23.February 2019 (continued the text)
Last change: 24.February 2019 (extended the text)
CONTEXT
In the overview of the AAI paradigm version 2 you can find this section dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).
THE DAILY LIFE PERSPECTIVE
The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.
From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.
A certain aspect of the empirical view of the world is the fact, that some biological systems called ‘homo sapiens’, which emerged only some 300.000 years ago in Africa, show a special property usually called ‘consciousness’ combined with the ability to ‘communicate by symbolic languages’.
As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which is further embedded in an environment.
Thus those ‘things’ about which we are ‘conscious’ are not ‘directly’ the objects and events of the surrounding real world but the ‘constructions of the brain’ based on actual external and internal sensor inputs as well as already collected ‘knowledge’. To qualify the ‘conscious things’ as ‘different’ from the assumed ‘real things’ ‘outside there’ it is common to speak of these brain-generated virtual things either as ‘qualia’ or — more often — as ‘phenomena’ which are different to the assumed possible real things somewhere ‘out there’.
PHILOSOPHY AS FIRST PERSON VIEW
‘Philosophy’ has many facets. One enters the scene if we are taking the insight into the general virtual character of our primary knowledge to be the primary and irreducible perspective of knowledge. Every other more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.
There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be ‘generated’ by the consciousness ‘itself’ — mostly called ‘by will’ — and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense ‘given’ and ‘independent’, which appear and disappear ‘of their own accord’. It is common to call these independent phenomena ‘empirical phenomena’, which represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these ‘empirical phenomena’ are still ‘phenomena’, virtual entities generated by the brain inside the brain, not directly controllable ‘by will’.
There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are ‘clusters’/ ‘patterns’ of phenomena which ‘represent’ possible events or states.
Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perception of two different observers possibly differs somehow with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers, enhances the quality, improves the precision of the perception of the empirical phenomena.
From these considerations we can derive the following informal definitions:
(1) Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
(2) Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a previously introduced empirical standard object.
(3) Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon which can potentially be observed by other persons in my body as causally correlated with the phenomenon.
(4) Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).
It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as to its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a rather ‘hard’ task.
BRIDGING THE GAP BETWEEN BRAINS
As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.
Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.
But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.
The single brain ‘knows’ about the relationship between some sounds and ‘something else’ because the brain is able to ‘generate relations’ between brain-structures for sounds and brain-structures for something else. These relations are some real connections in the brain. Therefore sounds can be related to ‘something else’ or certain objects, and events, objects etc. can become related to certain sounds. But these ‘meaning relations’ can only ‘bridge the gap’ to another brain if both brains are using the same ‘mapping’, the same ‘encoding’. This is only possible if the two brains with their bodies share a real world situation RW_S where the perceptions of the both brains are associated with the same parts of the real world between both bodies. If this is the case the perceptions P(RW_S) can become somehow ‘synchronized’ by the shared part of the real world which in turn is transformed in the brain structures P(RW_S) —> B_S which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication is using sound sequences exchanged between brains via the body and the air of an environment as ‘expressions’ which can be recognized as part of a learned encoding which enables the receiving brain to identify a possible meaning candidate.
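A toy sketch of this encoding idea (all names and the simple co-occurrence rule are assumptions for illustration): two agents build their word-object associations only from episodes they share; because both encodings were synchronized by the same shared part of the real world, a word uttered by one agent can afterwards be decoded by the other.

```python
# A toy sketch of the MEANING(B_S, B_A) idea (the names and the simple
# co-occurrence rule are illustrative assumptions): two agents build their
# word-object associations only from episodes they share; because both
# encodings are synchronized by the same shared part of the real world,
# a word uttered by one agent can afterwards be decoded by the other.

from collections import defaultdict

def learn_encoding(shared_episodes):
    """Count how often a heard word co-occurs with a perceived object and map
    each word to the object it co-occurred with most often."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, perceived_objects in shared_episodes:
        for obj in perceived_objects:
            counts[word][obj] += 1
    return {w: max(objs, key=objs.get) for w, objs in counts.items()}

# episodes shared by both agents: (uttered word, objects present in the scene)
episodes = [
    ("tree",  {"tree", "sky"}),
    ("tree",  {"tree", "river"}),
    ("river", {"river", "tree"}),
    ("river", {"river", "stone"}),
]

encoding_A = learn_encoding(episodes)   # brain A
encoding_B = learn_encoding(episodes)   # brain B, same shared situations

word = "river"
print("A means:", encoding_A[word], "- B decodes:", encoding_B[word])
```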
DIFFERENT MODES TO EXPRESS MEANING
Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.
VISUAL ENCODING
A direct way to express the internal meaning structures of a brain is to use a ‘visual code’ which represents by some kinds of drawing the visual shapes of objects in the space, some attributes of shapes, which are common for all people who can ‘see’. Thus a picture and then a sequence of pictures like a comic or a story board can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.
Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of a spoken language. The sequence of the pictures represents additionally some ‘timely order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.
FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS
Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w, using signs for parts of words or even whole words. The possible meanings of these written expressions were no longer directly ‘visible’. The meaning was now only available for those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus merely hearing or reading a language expression would tell the reader either ‘nothing’ or some ‘possible meanings’ or a ‘definite meaning’.
If one has only the written expressions, then one has to ‘know’ with which ‘meaning in the brain’ the expressions have to be associated. And what is very special about the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very ‘concrete’ visual objects, while the written expressions are ‘general’ expressions allowing many different concrete interpretations. Thus the expression ‘person’ can be associated with many thousands of different concrete objects; the same holds for the expressions ‘road’, ‘moving’, ‘before’ and so on. Thus the written expressions are like ‘manufacturing instructions’ to search for possible meanings and to configure these meanings into a ‘reasonable’ complex matter. And because written expressions are in general rather ‘abstract’/ ‘general’, allowing numerous possible concrete realizations, they are very ‘economic’: they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.
FORMAL MATHEMATICAL WRITTEN EXPRESSIONS
Besides the written expressions of everyday languages one can observe later in the history of written languages the steady development of a specialized version called ‘formal languages’ L_f with many different domains of application. Here I am focusing on the formal written languages which are used in mathematics as well as some pictorial elements to ‘visualize’ the intended ‘meaning’ of these formal mathematical expressions.
One prominent concept in mathematics is the concept of a ‘graph’. In the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes. Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes then N x N represents the set of all pairs of these nodes.
In a more specialized version the edges are ‘directed’ (like a ‘one way road’) and also can be ‘looped back’ to a node occurring ‘earlier’ in the graph. If such back-looping arrows occur a graph is called a ‘cyclic graph’.
If one wants to use such a graph to describe some ‘states of affairs’ with their possible ‘changes’, one can ‘interpret’ a ‘node’ as a state of affairs and an arrow as a change which turns one state of affairs S into a new one S’ which is minimally different from the old one.
As a state of affairs I understand here a ‘situation’ embedded in some ‘context’, presupposing some common ‘space’. The possible ‘changes’ represented by arrows presuppose some dimension of ‘time’. Thus if a node n’ follows a node n, indicated by an arrow, then the state of affairs represented by the node n’ is to be interpreted as following the state of affairs represented by the node n with regard to the presupposed time T ‘later’, or n < n’ with ‘<‘ as a symbol for a temporal ordering relation.
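A minimal sketch of this graph concept (state names are illustrative): nodes stand for states of affairs, directed edges — pairs taken from N x N — stand for changes leading to a later state, and a back-looping edge makes the graph cyclic.

```python
# A minimal sketch of the graph concept used here (state names are
# illustrative): nodes represent states of affairs, directed edges -- pairs
# taken from N x N -- represent changes leading to a later state; a
# back-looping edge makes the graph cyclic.

nodes = {"S1", "S2", "S3"}
edges = {("S1", "S2"), ("S2", "S3"), ("S3", "S1")}   # the last edge loops back

def successors(n):
    """All nodes reachable from n by one directed edge."""
    return {b for (a, b) in edges if a == n}

def is_cyclic(start, seen=frozenset()):
    """Following the arrows, can we come back to a node we have already seen?"""
    if start in seen:
        return True
    return any(is_cyclic(nxt, seen | {start}) for nxt in successors(start))

print("successors of S1:", successors("S1"))   # {'S2'}
print("cyclic:", is_cyclic("S1"))              # True, because of S3 -> S1
```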
The space can be any kind of a space. If one assumes as an example a 2-dimensional space configured as a grid –as shown in figure 6 — with two tokens at certain positions one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:
TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
NEIGHBOR(T1,T2)
CELL(C1), POSITION(1,2), FREE(C1)
‘T1’, ‘T2’, as well as ‘C1’ are names of objects, ‘TOKEN’, ‘BLACK’ etc. are names of properties, and ‘NEIGHBOR’ is a relation between objects. This results in the equation: S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}.
These facts describe the situation S1. If it is important to describe possible objects ‘external to the situation’ as important factors which can cause some changes then one can describe these objects as a set of facts in a separated ‘context’. In this example this could be two players which can move the black and white tokens and thereby causing a change of the situation. What is the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region one usually would not count the planets and the atmosphere as part of this region but one knows that e.g. the sun can severely influence the situation in combination with the atmosphere.
Let us stay with a state of affairs with only a situation without a context. The state of affairs is a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:
Thus the new state S2 is generated out of the old state S1 by uniting S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still ‘valid’. In a more general manner one can introduce a change-expression with the following format: S2 = S1 ∪ { … }.
This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.
This layout of a change expression can also be used if some facts have to be modified or removed from a state. If, for instance, for some reason the white token should be removed from the situation, one could write the change as a subtraction of the corresponding facts, e.g. S3 = S2 − {TOKEN(T2), WHITE(T2), POSITION(T2,2,1)}.
These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If — as in the example above — the object token T2 disappears, then the relation ‘NEIGHBOR(T1,T2)’ no longer holds. This points to a hierarchy of dependencies with the ‘basic facts’ at the ‘root’ of a situation and all the other facts ‘above’ the basic facts, or ‘higher’, depending on the basic facts. Thus ‘higher order’ facts should be added only for the actual state and have to be ‘re-computed’ anew for every follow-up state.
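The following sketch (the fact encoding is an illustrative assumption, not the author's exact notation) implements states as sets of facts and change expressions which add or remove facts; the relational NEIGHBOR facts are recomputed for every new state because they depend on the basic POSITION facts, exactly as described above.

```python
# A small sketch of states as sets of facts and of change expressions which
# add or remove facts. The fact encoding is an illustrative assumption, not
# the author's exact notation; the NEIGHBOR facts are treated as 'higher
# order' facts and recomputed for every state from the basic POSITION facts.

S1 = {
    ("TOKEN", "T1"), ("BLACK", "T1"), ("POSITION", "T1", 1, 1),
    ("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1),
    ("CELL", "C1"), ("POSITION", "C1", 1, 2), ("FREE", "C1"),
}

def positions(state):
    return {f[1]: (f[2], f[3]) for f in state if f[0] == "POSITION"}

def neighbors(state):
    """Higher-order facts: two objects are neighbors if their grid distance is 1."""
    pos = positions(state)
    return {("NEIGHBOR", a, b) for a in pos for b in pos if a < b
            and abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) == 1}

def change(state, add=frozenset(), remove=frozenset()):
    """Change expression: S' = (S - remove) + add."""
    return (state - set(remove)) | set(add)

S2 = change(S1, add={("TOKEN", "T3"), ("BLACK", "T3"), ("POSITION", "T3", 2, 2)})
S3 = change(S2, remove={("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1)})

print("neighbors in S2:", neighbors(S2))
print("neighbors in S3:", neighbors(S3))   # relations involving T2 are gone
```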
If one would specify a context for state S1 saying that there are two players and one allows for each player actions like ‘move’, ‘insert’ or ‘delete’ then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:
PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)
In that case one could enhance the change statement in the following way:
This would read as follows: given state S1 the player PB1 inserts a black token at position (2,2); this yields a new state S2.
With or without a specified context, but with regard to a set of possible change statements, it can be the case — which is the usual case — that there is more than one option for what can be changed. Some of the main types of changes are the following ones:
RANDOM
NOT RANDOM, which can be specified as follows:
With PROBABILITIES (classical, quantum probability, …)
DETERMINISTIC
Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, in different collected time periods as an ‘oscillating system’ with different behavior, or even as a random system with changing probabilities. This makes the forecasting of systems with adaptive and/or learning components rather difficult.
Another aspect results from the fact that there can be states either with one actor which can cause more than one action in parallel, or a state with multiple actors which can act simultaneously. In both cases the resulting total change eventually has to be ‘filtered’ through some additional rules telling what is ‘possible’ in a state and what not. Thus if in the example of figure 6 both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action or — as in a computer game — simultaneous actions are allowed, but the ‘geometry of a 2-dimensional space’ would not allow two different tokens to be at the same position.
Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as later. There is no specific ‘metric’ of time. If instead a certain ‘clock’ is specified, then all changes have to be aligned with this ‘overall clock’. Then one can specify at what ‘point of time t’ the change will begin and at what point of time t’ the change will end. If there is more than one change specified, then these different changes can have different timings.
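A brief sketch of how the different change types (deterministic, with probabilities, random) and one overall clock could be combined in a simple simulation loop; the rules and the time step are illustrative assumptions.

```python
# A brief sketch of how the different change types (deterministic, with
# probabilities, random) and one overall clock could be combined in a simple
# simulation loop. The rules and the time step are illustrative assumptions.

import random

def deterministic(state):      # always the same change
    return state + 1

def probabilistic(state):      # change selected with fixed probabilities
    return state + (1 if random.random() < 0.7 else -1)

def pure_random(state):        # change selected uniformly at random
    return state + random.choice([-1, 0, 1])

rules = [deterministic, probabilistic, pure_random]

state, t, dt = 0, 0.0, 0.5     # one overall clock with step dt
for step in range(6):
    rule = rules[step % len(rules)]
    state = rule(state)
    t += dt                    # every change is aligned with the clock
    print(f"t={t:.1f}  rule={rule.__name__:13s}  state={state}")
```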
THIRD PERSON VIEW
Up until now the point of view describing a state and the possible changes of states has been the so-called 3rd-person view: what can a person perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the ‘surface’ of objects, including all kinds of actors. Thus if a driver of a car steers his car in a certain direction, then the ‘observing person’ can see what happens, but cannot ‘look into’ the driver to see ‘why’ he is steering in this way or ‘what he is planning next’.
A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.
Nevertheless there are situations where one wants to ‘understand’ a bit more of ‘what is going on in a system’. Thus a biologist can be interested in understanding what mechanisms ‘inside a plant’ are responsible for the growth of a plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and men. For instance it is an interesting question what kinds of ‘processes’ are available in an animal to ‘navigate’ in the environment across distances. Even if the biologist can look ‘into the body’, even ‘into the brain’, the cells as such do not tell a sufficient story. One has to understand the ‘functions’ which are enabled by the billions of cells; these functions are complex relations associated with certain ‘structures’ and certain ‘signals’. For this it is necessary to construct an explicit formal (mathematical) model/ theory representing all the necessary signals and relations which can be used to ‘explain’ the observable behavior and which ‘explains’ the behavior of the billions of cells enabling such a behavior.
In a simpler, ‘relaxed’ kind of modeling one would not take into account the properties and behavior of the ‘real cells’ but would limit the scope to building a formal model which suffices to explain the observable behavior.
This kind of approach to set up models of possible ‘internal’ (as such hidden) processes of an actor can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.
HIDDEN WORLD PROCESSES
In this text all reported 3rd-person observations are called ‘actor story’, independent whether they are done in a pictorial or a textual mode.
As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.
It is possible to extend such an actor story (AS) by several actor models (AM).
An actor story defines the situations in which an actor can occur. This includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.
The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.
While the actor story can be checked as to whether it describes a process in an empirically ‘sound’ way, the actor models are either ‘purely theoretical’ but ‘behaviorally sound’, or they are also empirically sound with regard to the body of a biological or a technological system.
A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while ‘living’ in the actor story. These changes in behavior cannot completely be ‘foreseen’!
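The following minimal sketch (illustrative names) shows the relation between an actor story and an actor model: the story fixes, per situation, the stimulus offered to the actor and the set of allowed actions, and a purely behavioral actor model is checked for 'behavioral soundness' by testing whether its reactions stay within what the story allows.

```python
# A minimal sketch (illustrative names) of the relation between an actor story
# and an actor model: the story fixes, per situation, the stimulus offered to
# the actor and the set of actions the story allows; a purely behavioral actor
# model is 'behaviorally sound' if its reactions stay within what the story allows.

actor_story = [
    {"stimulus": "red_light",   "allowed_actions": {"stop"}},
    {"stimulus": "green_light", "allowed_actions": {"go"}},
]

class ActorModel:
    """A purely behavioral actor model: stimulus -> response."""
    def __init__(self, policy):
        self.policy = policy
    def react(self, stimulus):
        return self.policy.get(stimulus, "do_nothing")

def behaviorally_sound(model, story):
    return all(model.react(s["stimulus"]) in s["allowed_actions"] for s in story)

driver = ActorModel({"red_light": "stop", "green_light": "go"})
print(behaviorally_sound(driver, actor_story))   # True
```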
COGNITIVE EXPERT PROCESSES
According to the preceding considerations a homo sapiens as a biological system has besides many properties at least a consciousness and the ability to talk and by this to communicate with symbolic languages.
Looking to basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.
Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:
Speaker and hearer presuppose a space within which objects with properties can occur.
Changes can happen which presuppose some timely ordering.
There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
There is an implicit hierarchy of concepts starting with concrete objects at the ‘root level’, given as occurrences in a concrete situation. Other concepts of ‘higher levels’ refer to concepts of lower levels.
There are different kinds of relations between objects on different conceptual levels.
The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meanings has to be learned by each actor separately, but in cooperation with all the other actors with which the actor wants to share his meanings.
It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
To ‘learn’ dedicated matters and to be ‘critical’ about the quality of what one is learning requires some discipline, some learning methods, and a ‘learning-friendly’ environment. There is no guaranteed method of success.
There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today have not been sufficiently clarified.
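The implicit hierarchy of concepts mentioned above can be hinted at with a small Python sketch (not part of the original text); the objects (‘rex’, ‘bello’, ‘tweety’) and the concepts (‘dog’, ‘bird’, ‘animal’) are purely hypothetical examples.

```python
# Minimal sketch of a concept hierarchy: concrete objects at the 'root level'
# are occurrences in a concrete situation; higher-level concepts refer only
# to concepts (or objects) of lower levels. All names are hypothetical.

situation = ["rex", "bello", "tweety"]          # concrete occurrences (level 0)

concepts = {
    # concept name: (level, set of lower-level items it refers to)
    "dog":    (1, {"rex", "bello"}),
    "bird":   (1, {"tweety"}),
    "animal": (2, {"dog", "bird"}),
}

def extension(concept):
    """Collect all concrete objects (level 0) a concept finally refers to."""
    level, refs = concepts[concept]
    objects = set()
    for r in refs:
        objects |= extension(r) if r in concepts else {r}
    return objects

print(extension("animal"))   # -> {'rex', 'bello', 'tweety'}
```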
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the fourth chapter, dealing with the epistemology of actors within an AAI analysis process.
EPISTEMOLOGY AND THE EMPIRICAL SCIENCES
Epistemology is a sub-discipline of general philosophy. While a special discipline in empirical science is defined by a certain sub-set of the real world (RW) together with empirical measurement methods generating empirical data which can be interpreted by a formalized theory, philosophy is not restricted to a sub-field of the real world. This is important because an empirical discipline has no methods to define itself. Chemistry, e.g., can define by which kinds of measurement it is gaining empirical data, and it can offer different kinds of formal theories to interpret these data, including inferences to forecast certain reactions given certain configurations of matter; but chemistry is not able to explain the way a chemist is thinking, how the language works which the chemist is using, etc. Thus empirical science presupposes a general framework of bodies, sensors, brains, languages etc. to be able to do a very specialized, but as such highly important, job. One can then define ‘philosophy’ as that kind of activity which tries to clarify all these conditions which are necessary to do science, as well as how cognition works in the general case.
Given this, one can imagine that philosophy is in principle a nearly ‘infinite’ task. To not get lost in this conceptual infinity it is recommended to start with concrete processes of communication which are oriented towards generating those kinds of texts which can be shown to be ‘related to parts of the empirical world’ in a decidable way. Such texts are here called ‘empirically sound’ or ‘empirically true’. It is to be supposed that there will be texts for which it seems clear that they are empirically sound, others will appear ‘fuzzy’ with respect to such a criterion, and still others will appear without any direct relation to empirical soundness.
In the empirical sciences one is using so-called empirical measurement procedures as benchmarks to decide whether one has empirical data or not, and it is commonly assumed that every ‘normal observer’ can use these data like every other ‘normal observer’. But because individual, single data have nearly no meaning on their own, one needs relations, sets of relations (models), and even more complete theories to integrate the data into a context which allows some interpretation and some inferences for forecasting. But these relations, models, or theories cannot be directly inferred from the real world. They have to be created by the observers as ‘working hypotheses’ which can fit with the data or not. And these constructions are grounded in highly complex cognitive processes which follow their own built-in rules and which are mostly not conscious. ‘Cognitive processes’ in biological systems, especially in human persons, are completely generated by a brain and constitute therefore a ‘virtual world’ of their own. This cognitive virtual world is not the result of a 1-to-1 mapping from the real world into brain states. This becomes important in the moment where the brain maps this virtual cognitive world into some symbolic language L. While the symbols of a language (sounds or written signs or …) as such have no meaning, the brain enables a ‘coding’, a ‘mapping’ from symbolic expressions into different states of the brain. In the ‘light’ of such encodings the symbolic expressions have some meaning. Besides the fact that different observers can have different encodings, it is always an open question whether the encoded meaning of the virtual cognitive space has something to do with some part of the empirical reality. Empirical data generated by empirical measurement procedures can help to coordinate the virtual cognitive states of different observers with each other, but this coordination is not an automatic process. Empirically sound language expressions are difficult to get and therefore of a high value for the survival of mankind. To generate empirically sound formal theories is even more demanding, and until today there exists no commonly accepted concept of the right format of an empirically sound theory. In an era which calls itself ‘scientific’ this is a very strange fact.
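The idea that meaning is an encoding from symbolic expressions into internal states, and that shared empirical measurement procedures can coordinate different observers, can be hinted at with a small Python sketch (not part of the original text); the two observers, their internal state labels, and the 20 °C threshold are purely hypothetical.

```python
# Minimal sketch of 'meaning' as an encoding from symbolic expressions into
# internal (brain) states, and of coordinating two such encodings via shared
# empirical measurement data. Observers, expressions, and the threshold are
# hypothetical illustrations only.

# Each observer has a private encoding: expression -> internal state.
observer_A = {"warm": "A-state-17", "cold": "A-state-03"}
observer_B = {"warm": "B-state-96", "cold": "B-state-41"}

def classify(temperature_celsius):
    # A shared, public measurement procedure acting as a benchmark.
    return "warm" if temperature_celsius >= 20.0 else "cold"

# Both observers apply the same expression to the same measured situations,
# even though their internal encodings of that expression differ.
measurements = [25.3, 4.1, 19.9, 30.0]
for value in measurements:
    expression = classify(value)
    print(value, "->", expression,
          "| A encodes it as", observer_A[expression],
          "| B encodes it as", observer_B[expression])
```

The two observers never exchange their internal encodings; they only coordinate on which expression to use for a measured situation, which is what makes the expression ‘empirically sound’ for both of them.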
EPISTEMOLOGY OF THE AAI-EXPERTS
Applying these general considerations to the AAI experts trying to construct an actor story which describes at least one possible path from a start state to a goal state, one can pick up the different languages the AAI experts are using and ask under which conditions these languages have some ‘meaning’, and under which conditions these meanings can be called ‘empirically sound’.
In this book three different ‘modes’ of an actor story will be distinguished:
A textual mode using some ordinary everyday language, thus using spoken language (stored in an audio file) or written language as a text.
A pictorial mode using a ‘language of pictures’, possibly enhanced by fragments of texts.
A mathematical mode using graphical presentations of ‘graphs’ enhanced by symbolic expressions (text), or using symbolic expressions only.
For every mode it has to be shown how an AAI expert can generate an actor story out of the virtual cognitive world of his brain and how it is possible to decide the empirical soundness of the actor story.
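For the mathematical mode, a minimal Python sketch (not part of the original text) can illustrate an actor story as a directed graph with situations as nodes and actions as labeled edges, together with a search for one possible path from a start state to a goal state; all state and action names are hypothetical.

```python
# Minimal sketch of the 'mathematical mode' of an actor story: situations are
# nodes of a graph, changes (triggered by actions) are labeled edges, and an
# analysis can look for at least one path from a start state to a goal state.
# The concrete states and actions are hypothetical.

actor_story = {
    # state: [(action, follow-up state), ...]
    "start":     [("press_button", "door_open")],
    "door_open": [("walk_through", "hallway")],
    "hallway":   [("take_elevator", "goal"), ("take_stairs", "goal")],
    "goal":      [],
}

def find_path(story, state, goal, path=()):
    """Return one sequence of (state, action) pairs leading from state to goal."""
    if state == goal:
        return list(path) + [(state, None)]
    for action, nxt in story.get(state, []):
        result = find_path(story, nxt, goal, path + ((state, action),))
        if result:
            return result
    return None

print(find_path(actor_story, "start", "goal"))
```

Because every node and edge of such a graph can be related to observable situations and actions, this mode lends itself most directly to checking the empirical soundness of an actor story.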