
CHATGPT4 – PHILOSOPHY : Digging into the Human Meaning Space

Last Change: Aug 14, 2024

Inquirer: Gerd Doeben-Henisch

Email: info@uffmm.org

CONTEXT

This post is part of the section Dialogues with chatGPT4.

SUMMARY

Inquirer to chatGPT4: I will take the following short summary as a bridge to the follow-up ideas: (1) Language expressions are composed of symbols which are determined by rules and structures; (2) The inner states related to meaning are diverse and involve various cognitive, emotional, and sensory processes; (3) At the core are mental models that represent our understanding of the world. These models include abstract ideas, concrete objects, and complex scenarios. Furthermore, the mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. (4) Cognitive processes like inference and reasoning help us by connecting existing knowledge with knowledge about possible changes, to derive possible new states. Can you agree with this?

ChatGPT4: Yes, I agree with your summary. You’ve captured the essential points about how language expressions, inner states, mental models, and cognitive processes interact to create and understand meaning.

CHATGPT4 – PHILOSOPHY : Digging into the Human Meaning Space

Hi Chat, three days ago we reached the following conclusion centering around the ‘human meaning space’ : “Philosophical texts themselves are important, but they gain their true significance within the human meaning space that individuals create through their interactions with these texts and with each other. This meaning space, rooted in the experiences and interpretations of individual persons, is what ultimately forms the building blocks of a unified philosophical tradition. The tradition is not a static collection of texts, but a dynamic, evolving process of meaning-making that is constantly being enriched and redefined by those who engage with it.” Do you know something about the structure and dynamics of this meaning space?

The concept of a **human meaning space** is a fascinating and complex idea, touching on cognitive science, linguistics, philosophy, and social theory. While the idea isn’t always explicitly framed in these terms, we can explore the structure and dynamics of this meaning space by drawing on various theories and concepts from these fields.

### 1. **Structure of the Human Meaning Space**:
The structure of the human meaning space can be understood as a multi-layered, interconnected web of meanings, built up through individual and collective experiences, language, and cultural practices. Here are some key elements:

– **Conceptual Frameworks**: At the core of the meaning space are conceptual frameworks—sets of basic concepts and categories that individuals and cultures use to organize and interpret the world. These include ideas about time, space, causality, identity, morality, and so on. Different cultures and individuals may have different conceptual frameworks, but they often share certain universal elements.

– **Language**: Language plays a central role in structuring the meaning space. Words, sentences, and narratives are the tools through which we express and communicate our thoughts and experiences. The structure of language, including grammar, syntax, and semantics, shapes how we perceive and describe the world (?)

– **Cultural Symbols and Norms**: Beyond language (?), cultural symbols (like religious icons, flags, or even corporate logos) and social norms (like laws or customs) contribute to the structure of the meaning space. These elements provide shared references and guide behavior, helping individuals navigate the social world.

– **Individual and Collective Memory**: The meaning space is also shaped by memory—both individual memories of personal experiences and collective memories shared by a community or society. These memories influence how we interpret new experiences and how we understand our place in the world.

### 2. **Dynamics of the Human Meaning Space**:
The meaning space is not static; it is constantly evolving as individuals and communities interact with the world and each other. Here are some key dynamics:

– **Interpretation and Reinterpretation**: Individuals continuously interpret and reinterpret experiences and texts. New experiences, ideas, and cultural influences can lead to shifts in how concepts are understood, how language is used, and how meaning is constructed. This process of reinterpretation is central to the evolution of philosophical traditions.

– **Dialogue and Discourse**: Meaning is often shaped and reshaped through dialogue and discourse. When individuals engage in conversation or debate, they exchange ideas, challenge assumptions, and co-create new understandings. This dynamic exchange helps to refine and expand the meaning space.

– **Cultural Evolution and Adaptation**: As cultures evolve, so too does the meaning space. New technologies, social movements, and global interactions introduce new concepts and symbols into the cultural lexicon, which can alter or expand the collective meaning space. For example, the advent of digital technology has introduced new concepts like “virtual reality” and “cyberspace” that reshape our understanding of reality and communication.

– **Learning and Education**: The transmission of knowledge through education plays a crucial role in the dynamics of the meaning space. As individuals learn from teachers, texts, and experiences, they internalize and modify the existing meaning space, contributing to its continuity and transformation.

– **Emergence of New Concepts**: Sometimes, entirely new concepts emerge within the meaning space, often as a response to new challenges or discoveries. These emergent concepts can reshape the structure of the meaning space, leading to new ways of thinking and understanding.

### 3. **Philosophical Implications**:
– **Hermeneutics**: The study of interpretation, especially of texts, is central to understanding the dynamics of the meaning space. Hermeneutics explores how meanings are constructed, how texts are understood, and how interpretations evolve over time.

– **Social Constructivism**: This theory posits that much of what we consider reality is socially constructed through language, culture, and shared practices. The meaning space, in this view, is a collective construction, shaped by social interactions and cultural norms.

– **Phenomenology**: This philosophical approach focuses on the structures of experience and consciousness. It emphasizes how individuals perceive and make sense of the world, highlighting the subjective aspect of the meaning space.

### Conclusion:
The **human meaning space** is a dynamic, multi-layered structure shaped by language, culture, memory, and individual experiences. It evolves through interpretation, dialogue, cultural change, and the emergence of new ideas. Understanding this space requires considering both its structural elements and the dynamic processes that continuously shape and reshape it. This concept helps explain how philosophical traditions remain vibrant and relevant, constantly adapting to new contexts and challenges through the interplay of shared meanings and individual interpretations.

Hi Chat, it seems to me that I have to differentiate my question a bit more: Yes, the ‘human meaning space’ includes some ‘common feature’ between different individual humans which is manifested by the shared language. But the ‘meaning’ of a language expression is bound to the inner processes of each individual person. Therefore the usage of the same language expression does not guarantee that different individual persons automatically associate the ‘same meaning’ with a certain language expression. Would you agree?

Yes, I completely agree with your differentiation. The idea that the meaning of a language expression is tied to the internal processes of each individual person is crucial. Here’s how this distinction can be understood:

### 1. **Shared Language vs. Individual Interpretation**:
– **Shared Language**: Language provides a common framework or set of symbols that allows individuals to communicate. This shared aspect of language is what enables us to exchange ideas, convey information, and engage in dialogue.
– **Individual Interpretation**: However, the **meaning** that each person associates with a particular language expression is deeply influenced by their personal experiences, emotions, memories, and cognitive processes. As a result, even though two people might use the same word or phrase, they might not associate it with exactly the same meaning.

### 2. **The Gap Between Expression and Interpretation**:
– **Subjectivity of Meaning**: Meaning is inherently subjective because it’s constructed within the mind of each individual. When someone hears or reads a word, their brain processes it based on their unique set of experiences and mental associations. For example, the word “home” might evoke a sense of warmth and security in one person but might bring up feelings of discomfort or displacement in another, depending on their personal history.
– **Context and Nuance**: The context in which a word or phrase is used also plays a significant role in how it’s interpreted. The same word can carry different meanings in different situations, and individuals might pick up on different nuances depending on their personal context.

### 3. **Communication and Miscommunication**:
– **Potential for Misunderstanding**: Because of these individual differences in interpretation, there is always the potential for misunderstanding or miscommunication. People might think they are conveying a certain meaning when using a word, but the listener or reader might interpret it differently based on their own mental framework.
– **Clarification Through Dialogue**: This is why dialogue and conversation are so important. By engaging in discussion, individuals can clarify their meanings, ask questions, and work toward a shared understanding, even if their initial interpretations were different.

### 4. **Philosophical Perspectives**:
– **Hermeneutics**: This branch of philosophy, particularly associated with thinkers like Hans-Georg Gadamer, emphasizes the importance of interpretation in understanding meaning. Hermeneutics suggests that every act of communication involves a process of interpretation, and this process is influenced by the interpreter’s historical and cultural context.
– **Wittgenstein’s Language Games**: Ludwig Wittgenstein introduced the idea of “language games,” where the meaning of words is understood through their use in specific contexts or activities. This highlights how meaning can vary depending on the “game” being played, and how understanding is often a matter of grasping the rules of that particular game, which can differ from one person to another.

### 5. **Implications for the Human Meaning Space**:
– The **human meaning space** is not a monolithic or uniform entity. It’s made up of countless individual meaning spaces that overlap and intersect in complex ways. The shared language provides a bridge between these individual spaces, but the meanings that cross this bridge are filtered through each person’s unique interpretive framework.
– **Interpersonal Interaction**: The richness of human communication comes from this diversity of meaning, but it also requires effort to navigate and understand. Building a shared understanding often involves recognizing and respecting these differences, and finding ways to connect across them.

### Conclusion:
While language gives us a shared set of symbols and expressions, the meanings attached to those expressions are deeply personal and subjective. The usage of the same language expression does not guarantee that different individuals will associate the same meaning with it, due to the influence of their unique internal processes and experiences. This complexity is both a challenge and a richness in human communication, requiring ongoing effort to achieve mutual understanding.

OK, let us take the following statement as starting point for the next ideas: “The human meaning space is made up of countless individual meaning spaces that overlap and intersect in complex ways. … Building a shared understanding often involves recognizing and respecting these differences, and finding ways to connect across them.” Here is one follow-up question: How can a human actor A with his individual meaning space coordinate his individual meaning space with the individual meaning space of another human actor B?

Coordinating individual meaning spaces between two human actors, A and B, involves a dynamic process of communication, negotiation, and mutual understanding. This process requires both actors to engage in various strategies that allow them to bridge the differences in their interpretations and experiences. Here are some key ways in which this coordination can occur:

### 1. **Active Communication and Dialogue**:
– **Expressing and Clarifying Meanings**: Actor A can express their thoughts, feelings, and interpretations clearly (?) to Actor B, and vice versa. This involves articulating not just what they think, but why they think it, including the experiences or reasoning behind their interpretations. Clarification questions, paraphrasing, and feedback loops are critical in ensuring that both parties understand each other’s intended meanings.
– **Active Listening**: Effective coordination requires not just speaking but also listening. Actor A needs to actively listen to Actor B’s explanations and interpretations, and vice versa. Active listening involves paying close attention, avoiding assumptions, and being open to understanding the other’s perspective without immediately judging or dismissing it.

### 2. **Contextualization**:
– **Providing Context**: Often, misunderstandings arise because different contexts or background information shape each actor’s interpretation. By providing context—explaining the background, the circumstances, or the specific meaning of terms as they understand them—Actor A can help Actor B align with their meaning space, and vice versa.
– **Shared Experiences**: Drawing on shared experiences can help align meaning spaces. If Actors A and B have a common history or have shared certain experiences, referencing these can create a common ground for understanding.

### 3. **Negotiation of Meaning**:
– **Negotiation and Compromise**: Sometimes, coordination requires negotiation, where both actors adjust their interpretations to reach a common understanding. This might involve redefining terms, agreeing on certain interpretations, or finding middle ground that accommodates both perspectives.
– **Iterative Refinement**: Actor A might initially explain something, Actor B might respond with their interpretation, and through back-and-forth exchanges, both actors refine their understanding until they reach a shared meaning.

### 4. **Use of Metaphors and Analogies**:
– **Metaphors and Analogies**: These tools can be powerful for bridging gaps in meaning spaces. By comparing a concept to something familiar to both actors, they can better align their interpretations. For example, Actor A might use an analogy that resonates with Actor B’s experiences, helping to convey a concept in a way that B can relate to more easily.

### 5. **Empathy and Perspective-Taking**:
– **Empathy**: Empathy involves trying to understand the other actor’s feelings and perspective. By empathizing with Actor B, Actor A can better grasp why B interprets things in a certain way, and this understanding can help in coordinating their meaning spaces.
– **Perspective-Taking**: This is the cognitive aspect of empathy, where Actor A consciously attempts to see things from Actor B’s viewpoint. By mentally putting themselves in B’s shoes, A can adjust their communication to better resonate with B’s meaning space.

### 6. **Cultural and Social Awareness**:
– **Awareness of Cultural Differences**: If Actors A and B come from different cultural backgrounds, their meaning spaces may differ significantly. Being aware of these cultural differences and how they might influence interpretation can help in adjusting communication strategies to better align meaning spaces.
– **Social Norms and Conventions**: Understanding and utilizing social norms that both actors share can also help in coordinating meaning spaces. These norms provide a framework for expected behaviors and interpretations, making it easier to find common ground.

### 7. **Feedback Mechanisms**:
– **Seeking and Providing Feedback**: Actor A can ask Actor B whether their message was understood as intended, and B can do the same. This feedback helps identify where misalignments in meaning might exist and provides an opportunity to correct them.
– **Continuous Adjustment**: Feedback allows for continuous adjustment. As Actors A and B continue to interact, they can refine their communication and interpretation strategies, leading to better alignment over time.

### 8. **Language and Conceptual Framing**:
– **Shared Vocabulary**: Developing a shared vocabulary, where both actors agree on the meanings of key terms, can facilitate coordination. This might involve explicitly defining terms or agreeing on certain concepts before proceeding with a discussion.
– **Conceptual Framing**: Framing discussions in terms of concepts or ideas that are familiar to both actors can help in aligning their meaning spaces. If both actors understand a concept similarly, they can use it as a reference point for further discussion.

### Conclusion:
Coordinating individual meaning spaces between two human actors involves a complex interplay of communication, empathy, negotiation, and shared context. By engaging in active dialogue, providing context, using metaphors, exercising empathy, and seeking feedback, Actors A and B can bridge their differences and build a shared understanding. This process is iterative and dynamic, requiring ongoing effort and adjustment as both actors navigate their individual meaning spaces and work toward alignment.

QUESTION: OK, I would like to keep the following statements as minimal requirements: (1) Effective coordination requires not just speaking but also listening. (2) If Actors A and B have a common history or have shared experiences, referencing these can create a common ground for understanding. Feedback allows for continuous adjustment. (3) (1) – (2) can support the development of a shared vocabulary. Would you agree?

Yes, I agree with your summary. The three statements you’ve outlined capture essential aspects of how effective coordination between individual meaning spaces can occur. Here’s a brief reflection on each point:

1. **Effective coordination requires not just speaking but also listening**:
– This emphasizes the importance of active, two-way communication. Listening is crucial for understanding the other person’s perspective, which is necessary for aligning meaning spaces.

2. **If Actors A and B have a common history or have shared experiences, referencing these can create a common ground for understanding. Feedback allows for continuous adjustment**:
– Shared experiences and history provide a foundation for mutual understanding. When actors reference these shared elements, they can more easily connect their meaning spaces. Feedback mechanisms allow both actors to continuously refine and adjust their understanding, ensuring that misalignments are addressed.

3. **(1) – (2) can support the development of a shared vocabulary**:
– Through effective communication and the use of shared experiences, Actors A and B can gradually develop a shared vocabulary. This shared vocabulary helps streamline future interactions by establishing common definitions and meanings for key terms, reducing potential misunderstandings.

These statements encapsulate the core processes that enable individuals to coordinate their meaning spaces and build a deeper, more effective understanding with each other.

Hi Chat, let us take a look into the ‘inner structure’ of an individual meaning space: (i) On the one side of meaning we have the ‘expressions of some language’ which are ‘internally represented’. (ii) On the other side we have a rich diversity of inner states of the human actor which deliver the ‘stuff’ that we call ‘meaning’ and which is ‘related’ to the ‘internally represented expressions’ of the used language. Can you describe a bit the ‘structure’ of these inner states related to language expressions?

The **inner structure** of an individual meaning space, especially how it relates to language expressions, is a complex and layered system. This system involves the interaction between **internally represented language expressions** (i.e., words, sentences, symbols) and the **inner states** of the human actor that give rise to meaning. Let’s explore this in more detail:

### 1. **Language Expressions and Internal Representations**:
– **Symbols and Words**: Language expressions are composed of symbols (letters, words, sounds) that are internally represented in the brain. These representations are tied to specific neural patterns or cognitive structures that store and process linguistic information.
– **Syntax and Grammar**: These internal representations are not just individual words but also the rules and structures (syntax, grammar) that determine how these words can be combined to form meaningful sentences. This layer of structure helps the mind organize language into coherent thoughts and communications.

### 2. **Inner States and Their Structure**:
The **inner states** related to meaning are diverse and involve various cognitive, emotional, and sensory processes. Here are some key components:

– **Conceptual Structures**:
– **Mental Models and Concepts**: At the core of the meaning-making process are **conceptual structures**—the mental models and concepts that represent our understanding of the world. These models include abstract ideas (like justice or beauty), concrete objects (like a tree or a car), and complex scenarios (like a conversation or a journey).
– **Schemas and Frames**: These are cognitive structures that organize knowledge and guide interpretation. A schema might represent a typical event (like “going to a restaurant”), while a frame provides a perspective or context for understanding a situation (like seeing an argument as a debate or a conflict).

– **Emotional and Affective States**:
– **Emotions**: Emotions play a significant role in the meaning space. When we process language, the meaning we derive is often colored by our emotional state. For instance, the word “home” might trigger feelings of warmth and safety for one person, while for another, it might evoke feelings of sadness or loss.
– **Affective Associations**: These are the emotional responses tied to specific concepts or experiences. They contribute to how strongly or weakly certain meanings resonate with us and can influence how we interpret language.

– **Sensory and Perceptual Experiences**:
– **Sensory Memories**: Sensory experiences (like sights, sounds, smells) are often tied to language expressions. For example, the word “ocean” might bring up the smell of saltwater, the sound of waves, and the feeling of sand underfoot, all of which contribute to the word’s meaning.
– **Imagery**: The mind often creates mental images in response to language. These images are not just visual but can involve other senses as well. This sensory component enriches the meaning associated with language expressions.

– **Cognitive Processes**:
– **Inference and Reasoning**: Cognitive processes like inference and reasoning help us make sense of language by connecting new information with existing knowledge. When we hear a sentence, we use reasoning to infer the speaker’s intent, predict outcomes, or draw connections with other ideas.
– **Attention and Focus**: The focus of attention determines which aspects of a language expression we emphasize and how we interpret its meaning. What we pay attention to can shape the meaning we derive from a given expression.

### 3. **Relational and Associative Networks**:
– **Associative Networks**: The mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. For example, the word “winter” might be associated with cold weather, snow, holidays, and feelings of coziness. These associations form a web of meaning that influences how language expressions are understood.
– **Relational Dynamics**: Meaning is often understood in terms of relationships between concepts. For instance, understanding the concept of “parent” is related to understanding “child,” “family,” and “care.” These relational dynamics are part of the inner structure that gives rise to meaning.
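To make the notion of an associative network a bit more concrete, here is a minimal illustrative sketch in Python that models such a web of associations as a weighted graph. The concepts and association strengths are invented examples drawn from the ‘winter’ and ‘parent’ illustrations above; it is a sketch of the data structure, not a claim about how the brain actually realizes such networks.

```python
# Minimal illustrative sketch: an associative network modeled as a weighted graph.
# Concepts and strengths are invented examples from the text above.

from collections import defaultdict

class AssociativeNetwork:
    def __init__(self):
        # adjacency map: concept -> {associated concept: association strength}
        self.links = defaultdict(dict)

    def associate(self, a: str, b: str, strength: float) -> None:
        """Create a symmetric association between two concepts."""
        self.links[a][b] = strength
        self.links[b][a] = strength

    def neighbors(self, concept: str):
        """Return the associations of a concept, strongest first."""
        return sorted(self.links[concept].items(), key=lambda kv: -kv[1])

# Hypothetical fragment of an individual meaning space
net = AssociativeNetwork()
net.associate("winter", "cold weather", 0.9)
net.associate("winter", "snow", 0.8)
net.associate("winter", "holidays", 0.6)
net.associate("winter", "coziness", 0.5)
net.associate("parent", "child", 0.9)    # relational pair
net.associate("parent", "family", 0.8)
net.associate("parent", "care", 0.7)

print(net.neighbors("winter"))
# [('cold weather', 0.9), ('snow', 0.8), ('holidays', 0.6), ('coziness', 0.5)]
```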

### 4. **Integration of Inner States and Language**:
– **Embodied Cognition**: This theory suggests that our cognition is deeply rooted in bodily experiences. The meaning of language is not just an abstract mental process but is also tied to how we physically interact with the world. For instance, understanding the word “grasp” involves not just the mental concept but also the sensory-motor experience of physically grasping something.
– **Contextual Integration**: The meaning of a language expression is often determined by the context in which it is used. This context includes both external factors (like the situation or environment) and internal states (like emotions or prior knowledge). The integration of these contextual factors is crucial in shaping meaning.

### Conclusion:
The inner structure of an individual meaning space is a complex system where **language expressions** are connected to a rich diversity of **inner states**. These inner states include conceptual structures, emotional and sensory experiences, cognitive processes, and associative networks. Together, these elements create a dynamic and multi-dimensional web of meaning that allows individuals to interpret and understand language in highly personalized ways. The interaction between these inner states and language is what gives rise to the meaning that individuals derive from their linguistic experiences.

I will take the following short summary as a bridge to the follow-up ideas: (1) Language expressions are composed of symbols which are determined by rules and structures; (2) The inner states related to meaning are diverse and involve various cognitive, emotional, and sensory processes; (3) At the core are mental models that represent our understanding of the world. These models include abstract ideas, concrete objects, and complex scenarios. Furthermore, the mind organizes concepts, emotions, sensory experiences, and language expressions into a network of associations. (4) Cognitive processes like inference and reasoning help us by connecting existing knowledge with knowledge about possible changes, to derive possible new states. Can you agree with this?

Yes, I agree with your summary. You’ve captured the essential points about how language expressions, inner states, mental models, and cognitive processes interact to create and understand meaning. Here’s a brief recap:

1. **Language Expressions**: These are composed of symbols (words, phrases) that are structured according to the rules of grammar and syntax, enabling coherent communication.

2. **Diverse Inner States**: Meaning arises from a complex interplay of cognitive, emotional, and sensory processes within the mind. These processes influence how we interpret and understand language.

3. **Mental Models and Associations**: At the core of meaning-making are mental models, which represent our understanding of the world. These models include abstract concepts, concrete objects, and complex scenarios. The mind organizes these elements into a network of associations, connecting various concepts, emotions, and sensory experiences.

4. **Cognitive Processes**: Inference, reasoning, and other cognitive processes allow us to connect existing knowledge with new information or hypothetical changes, enabling us to derive new states or understand potential outcomes.

This summary effectively bridges the foundational ideas we’ve discussed, setting the stage for deeper exploration into how these components work together in more complex scenarios or applications.

OK, let us stop here for today. I will come back to this issue to continue. Best regards

That sounds great! I’m glad we could delve into these ideas together. Feel free to reach out anytime you’re ready to continue. Best regards, and take care! 😊

Talking about the world

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 5, 2024, Last change: June 7, 2024)

Starting Point

A ‘text’ shall be written that speaks about the world, including all living beings, with ‘humans’ as the authors in the first instance. So far, we know of no cases where animals or plants write texts themselves about their view of life. We only know of humans who write from ‘their human perspective’ about life, animals, and plants. Much can be criticized about this approach. Upon further reflection, one might even realize that ‘humans writing about other humans and themselves’ is not so trivial either. Even humans writing ‘about themselves’ is prone to errors, can go completely ‘awry,’ can be entirely ‘wrong,’ which raises the question of what is ‘true’ or ‘false.’ Therefore, we should give some thought to how we humans can talk about the world and ourselves in a way that gives us a chance not just to ‘fantasize,’ but to grasp something that is ‘real,’ something that describes what truly characterizes us as humans, as living beings, as inhabitants of this planet… but then the question pops up again: what is ‘real’? Are we caught in a cycle of questions with answers, where the answers themselves are again questions upon closer inspection?

First Steps

Life on Planet Earth

At the start of writing, we assume that there is a ‘Planet Earth’ and on this planet there is something we call ‘life,’ and we humans—belonging to the species Homo sapiens—are part of it.

Language

We also assume that we humans have the ability to communicate with each other using sounds. These sounds, which we use for communication, we call here ‘speech sounds’ to indicate that the totality of sounds for communication forms a ‘system’ which we ultimately call ‘language.’

Meaning

Since we humans on this planet can use completely different sounds for the ‘same objects’ in the same situation, it suggests that the ‘meaning’ of speech sounds is not firmly tied to the speech sounds themselves, but somehow has to do with what happens ‘in our minds.’ Unfortunately, we cannot look ‘into our minds.’ It seems a lot happens there, but this happening in the mind is ‘invisible.’ Nevertheless, in ‘everyday life,’ we experience that we can ‘agree’ with others whether it is currently ‘raining’ or if it smells ‘bad’ or if there is a trash bin on the sidewalk blocking the way, etc. So somehow, the ‘happenings in the mind’ seem to have certain ‘agreements’ among different people, so that not only I see something specific, but the other person does too, and we can even use the same speech sounds for it. And since a program like chatGPT can translate my German speech sounds, e.g., into English speech sounds, I can see that another person who does not speak German, instead of my word ‘Mülltonne,’ uses the word ‘trash bin’ and then nods in agreement: ‘Yes, there is a trash bin.’ Would that be a case for a ‘true statement’?

Changes and Memories

Since we experience daily how everyday life constantly ‘changes,’ we know that something that just found agreement may no longer find it the next moment because the trash bin is no longer there. We can only notice these changes because we have something called ‘memory’: we can remember that just now at a specific place there was a trash bin, but now it’s not. Or is this memory just an illusion? Can I trust my memory? If now everyone else says there was no trash bin, but I remember there was, what does that mean?

Concrete Body

Yes, and then my body: time and again I need to drink something, eat something, I’m not arbitrarily fast, I need some space, … my body is something very concrete, with all sorts of ‘sensations,’ ‘needs,’ a specific ‘shape,’ … and it changes over time: it grows, it ages, it can become sick, … is it like a ‘machine’?

Galaxies of Cells

Today we know that our human body resembles less a ‘machine’ and more a ‘galaxy of cells.’ Our body has about 37 trillion (37 × 10¹²) body cells, with another roughly 100 trillion cells in the gut that are vital for our digestive system, and these cells together form the ‘body system.’ The truly incomprehensible thing is that these approximately 140 trillion cells are each completely autonomous living beings, with everything needed for life. And if you know how difficult it is for us as humans to maintain cooperation among just five people over a long period, then you can at least begin to appreciate what it means that 140 trillion beings manage to communicate and coordinate actions every second—over many years, even decades—so that the masterpiece ‘human body’ exists and functions.

Origin as a Question

And since there is no ‘commander’ who constantly tells all the cells what to do, this ‘miracle of the human system’ expands further into the question of where the ‘concept’ comes from that enables this ‘super-galaxy of cells’ to be what it is. How does this work? How did it arise?

Looking Behind Phenomena

In the further course, it will be important to gradually penetrate the ‘surface of everyday phenomena’ starting from everyday life, to make visible those structures that are ‘behind the phenomena,’ those structures that hold everything together and at the same time constantly move, change everything.

Fundamental Dimension of Time

All this implies the phenomenon ‘time’ as a basic category of all reality. Without time, there is also no ‘truth’…

[1] Specialists in brain research will of course raise their hand right away, and will want to say that they can indeed ‘look into the head’ by now, but let’s wait and see what this ‘looking into the head’ entails.

[2] If we assume that our home galaxy, the Milky Way, with an estimated 100 – 400 billion stars, has about 200 billion stars, then our body system would correspond to roughly 700 galaxies the size of the Milky Way, one cell for one star.
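Spelled out, the rough calculation behind this comparison (about 140 trillion cells, 200 billion stars per galaxy) is:

\[
\frac{140 \times 10^{12}\ \text{cells}}{200 \times 10^{9}\ \text{stars per galaxy}} = 700\ \text{Milky-Way-sized galaxies, one cell per star.}
\]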

[3] Various disciplines of natural sciences, especially evolutionary biology, have illuminated many aspects of this mega-wonder partially over the last approx. 150 years. One can marvel at the physical view of our universe, but compared to the super-galaxies of life on Planet Earth, the physical universe seems downright ‘boring’… Don’t worry: ultimately, both are interconnected: one explains the other…

Telling Stories

Fragments of Everyday Life—Without Context

We constantly talk about something: the food, the weather, the traffic, shopping prices, daily news, politics, the boss, colleagues, sports events, music, … mostly, these are ‘fragments’ from the larger whole that we call ‘everyday life’. People in one of the many crisis regions on this planet, especially those in natural disasters or even in war…, live concretely in a completely different world, a world of survival and death.

These fragments in the midst of life are concrete, concern us, but they do not tell a story by themselves about where they come from (bombs, rain, heat,…), why they occur, how they are connected with other fragments. The rain that pours down is a single event at a specific place at a specific time. The bridge that must be closed because it is too old does not reveal from itself why this particular bridge, why now, why couldn’t this be ‘foreseen’? The people who are ‘too many’ in a country or also ‘too few’: Why is that? Could this have been foreseen? What can we do? What should we do?

The stream of individual events hits us, more or less powerfully, perhaps even simply as ‘noise’: we are so accustomed to it that we no longer even perceive certain events. But these events as such do not tell a ‘story about themselves’; they just happen, seemingly irresistibly; some say ‘It’s fate’.

Need for Meaning

It is notable that we humans still try to give the whole a ‘meaning’, to seek an ‘explanation’ for why things are the way they are. And everyday life shows that we have a lot of ‘imagination’ concerning possible ‘connections’ or ’causes’. Looking back into the past, we often smile at the various attempts at explanation by our ancestors: as long as nothing was known about the details of our bodies and about life in general, any story was possible. In our time, with science established for about 150 years, there are still many millions of people (possibly billions?) who know nothing about science and are willing to believe almost any story just because another person tells this story convincingly.

Liberation from the Moment through Words

Because of this ability, with the ‘power of imagination’ to pack things one experiences into a ‘story’ that suggests ‘possible connections’, through which events gain a ‘conceptual sense’, a person can try to ‘liberate’ themselves from the apparent ‘absoluteness of the moment’ in a certain way: an event that can be placed into a ‘context’ loses its ‘absoluteness’. Just by this kind of narrative, the experiencing person gains a bit of ‘power’: in narrating a connection, the narrator can make the experience ‘a matter’ over which they can ‘dispose’ as they see fit. This ‘power through the word’ can alleviate the ‘fear’ that an event can trigger. This has permeated the history of humanity from the beginning, as far as archaeological evidence allows.

Perhaps it is not wrong to first identify humans not as ‘hunters and gatherers’ or as ‘farmers’ but as ‘those who tell stories’.

[1] Such a magic word in Greek philosophy was the concept of ‘breath’ (Greek “pneuma”). The breath not only characterized the individually living but was also generalized to a life principle of everything that connected both body, soul, and spirit as well as permeated the entire universe. In the light of today’s knowledge, this ‘explanation’ could no longer be told, but about 2300 years ago, this belief was a certain ‘intellectual standard’ among all intellectuals, the prevailing ‘worldview’; it was ‘believed’. Anyone who thought differently was outside this ‘language game’.

Organization of an Order

Thinking Creates Relationships

As soon as one can ‘name’ individual events, things, processes, properties of things, and more through ‘language’, it is evident that humans have the ability to not only ‘name’ using language but to embed the ‘named’ through ‘arrangement of words in linguistic expression’ into ‘conceived relationships’, thereby connecting the individually named items not in isolation but in thought with others. This fundamental human ability to ‘think relationships in one’s mind’, which cannot be ‘seen’ but can indeed be ‘thought’ [1], is of course not limited to single events or a single relationship. Ultimately, we humans can make ‘everything’ a subject, and we can ‘think’ any ‘possible relationship’ in our minds; there are no fundamental restrictions here.

Stories as a Natural Force

Not only history is full of examples, but also our present day. Today, despite the incredible successes of modern science, almost universally, the wildest stories with ‘purely thought relationships’ are being told and immediately believed through all channels worldwide, which should give us pause. Our fundamental characteristic, that we can tell stories to break the absoluteness of the moment, obviously has the character of a ‘natural force’, deeply rooted within us, that we cannot ‘eradicate’; we might be able to ‘tame’ it, perhaps ‘cultivate’ it, but we cannot stop it. It is an ‘elemental characteristic’ of our thinking, that is: of our brain in the body.

Thought and Verified

The experience that we, the storytellers, can name events and arrange them into relationships—and ultimately without limit—may indeed lead to chaos if the narrated network of relationships is ultimately ‘purely thought’, without any real reference to the ‘real world around us’, but it is also our greatest asset. With it, humans can not only fundamentally free themselves from the apparent absoluteness of the present, but we can also create starting points with the telling of stories, ‘initially just thought relationships’, which we can then concretely ‘verify’ in our everyday lives.

A System of Order

When someone randomly sees another person who looks very different from what they are used to, all sorts of ‘assumptions’ automatically form in each person about what kind of person this might be. If one stops at these assumptions, these wild guesses can ‘populate the head’ and the ‘world in the head’ gets populated with ‘potentially evil people’; eventually, they might simply become ‘evil’. However, if one makes contact with the other, they might find that the person is actually nice, interesting, funny, or the like. The ‘assumptions in the head’ then transform into ‘concrete experiences’ that differ from what was initially thought. ‘Assumptions’ combined with ‘verification’ can thus lead to the formation of ‘reality-near ideas of relationships’. This gives a person the chance to transform their ‘spontaneous network of thought relationships’, which can be wrong—and usually are—into a ‘verified network of relationships’. Since ultimately the thought relationships as a network provide us with a ‘system of order’ in which everyday things are embedded, it appears desirable to work with as many ‘verified thought relationships’ as possible.

[1] The breath of the person opposite me, which for the Greeks connected my counterpart with the life force of the universe, which in turn is also connected with the spirit and the soul…

Hypotheses and Science

Challenge: Methodically Organized Guessing

The ability to think of possible relationships, and to articulate them through language, is innate [1], but the ‘use’ of this ability in everyday life, for example to match thought relationships with the reality of everyday life, this ‘matching’/‘verifying’ is not innate. We can do it, but we don’t have to. Therefore, it is interesting to realize that since the first appearance of Homo sapiens on this planet [2], 99.95% of that time had passed before the establishment of organized modern science about 150 years ago. This can be seen as an indication that the transition from ‘free guessing’ to ‘methodically organized systematic guessing’ must have been anything but easy. And if today a large part of people—despite schooling and even higher education [3]—still tends to lean towards ‘free guessing’ and struggles with organized verification, then there seems to be a threshold that is not easy to overcome—and must be overcome again and again—for a person to transition from ‘free’ to ‘methodically organized’ guessing.[4]

Starting Point for Science

The transition from everyday thinking to ‘scientific thinking’ is fluid. The generation of ‘thought relationships’ in conjunction with language, due to our ability of creativity/imagination, is ultimately also the starting point of science. While in everyday thinking we tend to spontaneously and pragmatically ‘verify’ ‘spontaneously thought relationships’, ‘science’ attempts to organize such verifications ‘systematically’ and then to accept such ‘positively verified guesses’ as ‘empirically verified guesses’ that count as ‘conditionally true’ until proven otherwise. Instead of ‘guesses’, science likes to speak of ‘hypotheses’ or ‘working hypotheses’, but they remain ‘guesses’ through the power of our thinking and through the power of our imagination.[5]

[1] This means that the genetic information underlying the development of our bodies is designed so that our body with its brain is constructed during the growth phase in such a way that we have precisely this ability to ‘think of relationships’. It is interesting again to ask how it is possible that from a single cell about 37 trillion body cells (the approximately 100 trillion bacteria in the gut come ‘from outside’) can develop in such a way that they create the ‘impression of a human’ that we know.

[2] According to current knowledge, about 300,000 years ago in East Africa and North Africa, from where Homo sapiens then explored and conquered the entire world (there were still remnants of other human forms that had been there longer).

[3] I am not aware of representative empirical studies on how many people in a population tend to do this.

[4] Considering that we humans as the life form Homo sapiens only appeared on this planet after about 3.8 billion years, the 300,000 years of Homo sapiens make up roughly 0.008% of the total time since there has been life on planet Earth. Thus, not only are we as Homo sapiens a very late ‘product’ of the life process, but the ability to ‘systematically verify hypotheses’ also appears ‘very late’ in our Homo sapiens life process. Viewed across the entire life span, this ability seems to be extremely valuable, which is indeed true considering the incredible insights we as Homo sapiens have been able to gain with this form of thinking. The question is how we deal with this knowledge. This behavior of using systematically verified knowledge is not innate either.
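For reference, the two percentages used in this passage follow from simple ratios, assuming roughly 150 years of organized modern science, about 300,000 years of Homo sapiens, and about 3.8 billion years of life on Earth:

\[
\frac{150}{300{,}000} \approx 0.05\,\% \ (\text{hence } 99.95\,\% \text{ of the time before organized science}),
\qquad
\frac{300{,}000}{3.8 \times 10^{9}} \approx 0.008\,\% .
\]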

[5] The ability of ‘imagination’ is not the opposite of ‘knowledge’, but is something completely different. ‘Imagination’ is a trait that ‘shows’ itself the moment we start to think, perhaps even in the fact ‘that’ we think at all. Since we can in principle think about ‘everything’ that is ‘accessible’ to our thinking, imagination is a factor that helps to ‘select’ what we think. In this respect, imagination precedes thinking.

EMPIRICALLY TRUE?

Author: Gerd Doeben-Henisch

Contact: info@uffmm.org

Start: May 30, 2024

Last change: May 31, 2024

CONTEXT

This text is part of the text ‘Rebooting humanity’

(The German version can be found HERE)

Empirically True?

Hypotheses 2 – 4 …


With Hypothesis 1, a further paradox arises: If the structure of our human body (including its brain) is designed such that there is no direct, permanent one-to-one mapping of the real physical world outside the brain into the internal states of the body (including the brain), how can humans then make and use ’empirically true statements’ about something outside the body or outside the brain?

In everyday life, we can all have the following experiences:

When at least two people are involved and they have no special limitations, we can distinguish the following cases:

  1. There is an object with certain properties that the involved persons can perceive sensibly. Then one person A can say: ‘There is an object X with properties Y.’ And another person B can say: ‘Yes, I agree.’
  2. A certain object X with properties Y cannot be sensibly perceived by the involved persons. Then one person A can say: ‘The object X with properties Y is not there.’ And another person B can say: ‘Yes, I agree.’
  3. There is an object with certain properties that the involved persons can perceive sensibly, which they have never seen before. Then one person A can say: ‘There is an object with properties, which I do not know yet. This is new to me.’ And another person B can then say: ‘Yes, I agree.’
  4. A certain object X with properties Y cannot currently be sensibly perceived by the involved persons, but it was there before. Then one person A can say: ‘The object X with properties Y is no longer there.’ And another person B can say: ‘Yes, I agree.’

Introduction of Hypothesis 2
Case 1 becomes understandable if we assume that the sensory stimuli from object X with properties Y lead to activations in the sense organs, generating a sensory perception that can persist for the duration of object X’s presence.

To identify and classify this temporary perception as an ‘object of type X with properties Y,’ the involved persons must have a ‘memory’ that holds an ‘abstract object of type X with properties Y’ ready.

The ‘realized agreement’ between the perception of object X and the memory of a corresponding abstract object X then allows for the decision that there is a current perception of the abstract object X, whose ‘perceived properties’ ‘sufficiently match’ the ‘abstract properties.’

Important: this agreement occurring in the brain between a perceived object and a remembered object X does not imply anything about the real concrete circumstances that led to the perception of the object.[1]

This situation describes what is meant by Hypothesis 2: Persons can recognize a perceived object as an object of type X with properties Y if they have a memory available at the moment of the current perception.

Important: This Hypothesis 2 refers so far to what happens with and within an individual person. Another person normally cannot know about these processes. Internal processes in persons are — so far — not perceivable by others.[2]
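A minimal illustrative sketch in Python of what ‘sufficiently match’ could mean here, under the simplifying assumption that both the remembered abstract object and the current perception can be described as sets of properties; the object types, properties, and threshold below are invented examples, not part of the hypothesis itself:

```python
# Purely illustrative sketch of Hypothesis 2: a current perception is classified
# as an object of type X if its perceived properties "sufficiently match" the
# abstract properties held in memory. All entries below are invented examples.

ABSTRACT_MEMORY = {
    "trash bin": {"container", "lid", "on sidewalk"},
    "tree": {"trunk", "branches", "leaves"},
}

def classify(perceived: set, threshold: float = 0.6):
    """Return the remembered object type whose abstract properties
    sufficiently match the current perception, or None if none does."""
    best_type, best_score = None, 0.0
    for obj_type, abstract_props in ABSTRACT_MEMORY.items():
        overlap = len(perceived & abstract_props)
        score = overlap / len(abstract_props)  # fraction of abstract properties found
        if score > best_score:
            best_type, best_score = obj_type, score
    return best_type if best_score >= threshold else None

print(classify({"container", "lid", "smells bad"}))  # -> 'trash bin'
print(classify({"wings", "feathers"}))               # -> None (no sufficient match)
```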

[1] Modern simulation techniques can be so ‘real’ for most people that they make it difficult, if at all possible, to discern the ‘difference’ from the real world based solely on sensory perception. This would be the case where a sensory perception and a remembered abstract object in the brain show a substantial agreement, although there is no ‘real’ empirical object triggering the perception. … The computer itself, which ‘simulates’ something in a manner that looks ‘real’ to an observer (or the technical interface through which the computer’s signal reaches human sensors), is nevertheless a ‘real machine’ addressing the human sense organ ‘from the outside’.

[2] Even if modern neuroscientific measuring techniques can make electrical and chemical properties and activities visible, it is — so far — never possible to directly infer the functionalities hidden therein from these activities. Analogously, if one measures the electrical activities of the chips in a computer (which is possible and is done), one can never infer the algorithms currently being executed, even if one knows these algorithms!

Introduction of Hypothesis 3
Case 1 also includes the aspect that person A ‘verbally communicates’ something to person B. Without this verbal communication, B would know nothing about what is happening in A. In everyday life, a person usually perceives more than just one object, possibly many objects simultaneously. Therefore, knowing that a person is referring to a specific object and not one of the many other objects is not self-evident.

In Case 1, it should be stated: A person A says, “There is an object X with properties Y.” And another person B says, “Yes, I agree.”

When a person ‘says’ something that all participants recognize as ‘elements of language L,’ these elements of language L are ‘sounds,’ i.e., sound waves that are generated on one hand by a speaking organ (with a mouth) and received on the other side by an ‘ear.’ Let’s simply call the generating organ ‘actor’ and the receiving organ ‘sensor.’ Then, in verbal communication, a person produces sounds with an actor, and the participant of the communication receives these sounds through his sensor.

It is, of course, clear that the spoken and then also heard sounds of a language L have no direct relation to the internal processes of perception, remembering, and the ‘agreement process’ of perception and memory. However, it can be assumed that there must be ‘internal neural processes’ in the speaker and listener that correspond to the generated sounds, otherwise the actor could not act.[1] In the case of the sensor, it was already pointed out earlier how stimuli from the outside world lead to activations of neurons, creating a flow of neural signals.

Just as it was generally assumed that there are neural signal flows and different abstract structures of objects that can be ‘internally’ stored and further processed, something similar must be assumed for the neural encoding of spoken and heard sounds. If one can distinguish elements and certain combinations of elements in the spoken acoustic sound material of a language, it is plausible to assume that these externally identifiable structures are also found in the internal neural realization.

The core idea of Hypothesis 3 can then be formulated as follows: There is a neural counterpart to the acoustically perceivable structure of a language L, which moreover is the ‘active’ part in producing spoken language and in ‘translating’ spoken sounds into the corresponding neural representations.

[1] The human speech organ is a highly complex system in which many systems work together, all of which must be neuronally controlled and coordinated.

Introduction of Hypothesis 4
With Hypothesis 2 (memory, comparison of perception and memory) and Hypothesis 3 (independent sound system of a language), the next Hypothesis 4 arises, suggesting that there must be some ‘relationship’ (mathematically: a mapping) between the sound system of a language L and the memorable objects along with current perception. This mapping allows ‘sounds’ to be connected with ‘objects (including properties)’ and vice versa.
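A minimal illustrative sketch in Python of such a mapping, under the simplifying assumption that it can be represented as a learned, bidirectional dictionary between expressions of a language L and abstract object types; the vocabulary entries (reusing the ‘Mülltonne’ / ‘trash bin’ example from the earlier section) are invented illustrations, not a model of actual language learning:

```python
# Purely illustrative sketch of Hypothesis 4: a learned, bidirectional mapping
# between expressions of a language L and internally remembered abstract object
# types. The vocabulary entries are invented examples.

class MeaningMapping:
    def __init__(self):
        self.expr_to_object = {}  # language expression -> abstract object type
        self.object_to_expr = {}  # abstract object type -> language expression

    def learn(self, expression: str, object_type: str) -> None:
        """Associate an expression of language L with an abstract object type."""
        self.expr_to_object[expression] = object_type
        self.object_to_expr[object_type] = expression

    def understand(self, expression: str):
        """From a heard expression to the remembered object type (if learned)."""
        return self.expr_to_object.get(expression)

    def name(self, object_type: str):
        """From a perceived/remembered object type to an expression of L."""
        return self.object_to_expr.get(object_type)

# Persons A and B have each learned a (partly different) mapping:
a, b = MeaningMapping(), MeaningMapping()
a.learn("Mülltonne", "TRASH_BIN")   # A grew up with German
b.learn("trash bin", "TRASH_BIN")   # B grew up with English

# A statement of A and an agreement of B presuppose that both expressions
# point to the same abstract object type in their respective meaning spaces:
print(a.understand("Mülltonne") == b.understand("trash bin"))  # -> True
```

Whether A and B actually share the mapping for a given expression is, of course, exactly what the verbal exchange in Case 1 has to establish.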

In Case 1, person A has the perception of an object X with properties Y, along with a memory that ‘sufficiently matches,’ and person A says: “There is an object X with properties Y.” Another person B says, “Yes, I agree.”

Given the diversity of the world, constant changes, and the variety of possible sound systems [1], as well as the fact that humans undergo massive growth processes from embryo to continually developing person, it is unlikely that possible relationships between language sounds and perceived and remembered objects are ‘innate.’

This implies that this relationship (mapping) between language sounds and perceived and memorable objects must develop ‘over time,’ often referred to as ‘learning.’ Without certain presets, learning can be very slow; with available presets, it can be much faster. In the case of language learning, a person typically grows up in the presence of other people who generally already practice a language, which can serve as a reference system for growing individuals.

Language learning is certainly a lengthy process that includes not only individual acquisition but also inter-individual coordination among all those who practice a specific language L together.

As a result, learning a language means that not only is the ‘structure of the sound system’ learned, but also the association of elements of the sound system with elements of the perception-memory structure.[2]

In Case 1, therefore, person A must know which sound structure is used in the community of users of language L for an object X with properties Y, and so must person B. If A and B have the same ‘relationship knowledge’ of sounds to objects and vice versa, person B can ‘understand’ the verbal expression of A, “There is an object X with properties Y,” because he also has a perception of this object and remembers an object X that sufficiently matches the perceived object, and he would name this fact in the same way A did. Then person B can say, “Yes, I agree.”

[1] Consider the many thousands of languages that still exist on planet Earth, where different languages can be used in the same living environment. The same ‘perception objects’ can thus be named differently depending on the language.

[2] The study of these matters has a long history with very, very many publications, but there is not yet a universally accepted unified theory.

–!! Not finished yet !!–

Eva Jablonka, Marion J. Lamb, “Traditions and Cumulative Evolution: How a New Lifestyle is Evolving”, 2017 (Review 2022, 2023)

(Last change: July 13, 2023)

(The following text was created from a German text with the support of the software deepL.)

CONTEXT

This is a short review of an article from 2017 by Eva Jablonka and Marion J. Lamb, talking about their book „Evolution in vier Dimensionen. Wie Genetik, Epigenetik, Verhalten und Symbole die Geschichte des Lebens prägen“ (Evolution in Four Dimensions: How Genetics, Epigenetics, Behavior, and Symbols Shape the History of Life), Stuttgart, S. Hirzel Verlag, published in 2017. There was an earlier English edition (2005) with the title „Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life“, MIT Press. MIT Press comments on the 2005 English edition as follows: „Ideas about heredity and evolution are undergoing a revolutionary change. New findings in molecular biology challenge the gene-centered version of Darwinian theory according to which adaptation occurs only through natural selection of chance DNA variations. In Evolution in Four Dimensions, Eva Jablonka and Marion Lamb argue that there is more to heredity than genes. They trace four „dimensions“ in evolution—four inheritance systems that play a role in evolution: genetic, epigenetic (or non-DNA cellular transmission of traits), behavioral, and symbolic (transmission through language and other forms of symbolic communication). These systems, they argue, can all provide variations on which natural selection can act. Evolution in Four Dimensions offers a richer, more complex view of evolution than the gene-based, one-dimensional view held by many today. The new synthesis advanced by Jablonka and Lamb makes clear that induced and acquired changes also play a role in evolution.“

The article (in German) was published on pp. 141-146 in: Regina Oehler, Petra Gehring, Volker Mosbrugger (eds.), 2017, Series: Senckenberg Book 78, “Biologie und Ethik: Leben als Projekt. Ein Funkkolleg Lesebuch” (Biology and Ethics: Life as a Project. A Funkkolleg Reader), Stuttgart, E. Schweizerbart’sche Verlagsbuchhandlung (Nägele u. Obermiller) and Senckenberg Nature Research Society.

Main Positions extracted from the Text

To prepare an understanding of the larger text of the book, the author has tried to extract the most important assumptions/hypotheses from the short article:

  1. There is an existing ‘nature’ as a variable quantity with ‘nature-specific’ properties, and
  2. in this nature there are biological populations as a ‘component of nature’, which appear as ‘environment’ both for their own members and for other populations.
  3. Populations are themselves changeable.
  4. The members of a biological population are able to respond to properties of the surrounding nature – with the other members of the population as a component of the environment (self-reference of a population) – by a specific behavior.
  5. A behavior can be changed both in its form and in relation to a specific occasion.
  6. Due to the self-referentiality of a population, a population can therefore interactively change its own behavior and
  7. interact variably with the environment through the changed behavior (and thereby change the environment itself to a certain extent).
  8. It turns out that members of a population can recall certain behaviors over longer periods of time, depending on environmental characteristics.
  9. Due to differences in lifespan as well as memory, new behaviors can be transferred between generations, allowing for transmission beyond one generation.
  10. Furthermore, it is observed that the effect of genetic information can differ meta-genetically (epigenetically) in the context of reproduction, with these meta-genetic (epigenetic) changes occurring during the lifetime. The combination of genetic and epigenetic factors can affect offspring. The effect of such epigenetically influenced changes on actual behavior (phenotype) is not linear.

WORKING DEFINITIONS

For the further discussion it is helpful to clarify at this point which basic (guiding) terms will shape the further discourse. This will be done in the form of ‘tentative’ definitions. If these should prove ‘inappropriate’ in the further course, they can be modified accordingly.

Three terms seem to play a role as such guiding terms at this point: ‘population’, ‘culture’ and – anticipating the discussion – ‘society’.

Def1: Population

‘Population’ is here meant minimally as a grouping of biological individuals that forms a biological reproductive community (cf. [1]).

Def2: Culture

In common usage, the term ‘culture’ is restricted to the population of ‘people’. [2] Here the proposal is made to let ‘culture’ begin where biological populations are capable of minimal tradition formation based on their behavioral space. This expands the scope of the concept of ‘culture’ beyond the population of humans to many other biological populations, but not all.

Def3: Society

The term ‘society’ gains quite different meanings depending on the point of view (of a discipline). Here the term shall be defined minimalistically, to the effect that, biologically, a ‘society’ is minimally present if there is a biological population in which ‘culture’ occurs in at least a minimal way.

It will be further considered how these processes are to be understood in detail and what this may mean from a philosophical point of view.

COMMENTS

wkp-en: en.wikipedia.org

[1] Population in wkp-en: https://en.wikipedia.org/wiki/Population

[2] Culture in wkp-en: https://en.wikipedia.org/wiki/Culture

[3] Society in wkp-en: https://en.wikipedia.org/wiki/Society

ABSTRACT MORALITY IN A FINITE AND CHANGING WORLD

(June 20, 2023 – June 22, 2023)

(This text is a translation from the German blog of the author. The translation is supported by the deepL Software)

CONTEXT

The meaning of and adherence to moral values in the context of everyday actions has always been a source of tension, debate, and tangible conflict.

This text will briefly illuminate why this is so, and why it will probably never be different as long as we humans are the way we are.

FINITE-INFINITE WORLD

In this text it is assumed that the reality in which we ‘find’ ourselves from childhood is a ‘finite’ world. By this is meant that no phenomenon we encounter in this world – ourselves included – is ‘infinite’. In other words, all resources we encounter are ‘finite’. Even ‘solar energy’, which is considered ‘renewable’ in today’s parlance, is ‘finite’, although this finiteness outlasts the lifetimes of many generations of humans.

But this ‘finiteness’ is no contradiction to the fact that our finite world is continuously in a ‘process of change’ fed from many sides. What we have is a ‘self-changing finiteness’, something which in and of itself somehow ‘points beyond itself’! The ‘roots’ of this ‘immanent changeability’ are perhaps still largely unclear, but its ‘effects’ indicate that the respective ‘concrete finite’ is not the decisive thing; the ‘respective concrete finite’ is rather a kind of ‘indicator’ of an ‘immanent cause of change’ which ‘manifests itself’ by means of concrete finites in change. The ‘forms of concrete manifestations of change’ can therefore perhaps be a kind of ‘expression’ of something that ‘works immanently behind them’.

In physics there is the pair of terms ‘energy’ and ‘mass’, the latter as a synonym for ‘matter’. Atomic physics and quantum mechanics have taught us that the different ‘manifestations of mass/matter’ can only be a ‘state form of energy’. The everywhere and always assumed ‘energy’ is that ‘enabling factor’ which can ‘manifest’ itself in all the known forms of matter. ‘Changing matter’ can then be understood as a form of ‘information’ about the ‘enabling energy’.

If one takes what physics has found out so far about ‘energy’ as that form of ‘infinity’ which is accessible to us via the experiential world, then the various ‘manifestations of energy’ in diverse ‘forms of matter’ are forms of concrete finites, which, however, are ultimately not really finite in the context of infinite energy. All known material finites are only ‘transitions’ in a nearly infinite space of possible finites, which is ultimately grounded in ‘infinite energy’. Whether there is another ‘infinity’ ‘beside’ or ‘behind’ or ‘qualitatively quite different from’ the ‘experienceable infinity’ remains completely open.[1]

EVERYDAY EXPERIENCES

Our normal life context is what we now call ‘everyday life’: a bundle of regular processes, often associated with characteristic behavioral roles. This includes the experience of having a ‘finite body’; that ‘processes take real time’; that each process is characterized by its own ‘typical resource consumption’; and that ‘all resources are finite’ (although different time scales can apply here; see the example of solar energy).

Here too, the ‘embeddedness’ of all resources and their consumption in a comprehensive variability turns all data into ‘snapshots’, which have their ‘truth’ not only ‘in the moment’ but in the ‘totality of the sequence’! ‘Small changes’ in everyday life can, if they persist, reach magnitudes and produce effects which change a ‘familiar everyday life’ to such an extent that long-known ‘views’ and ‘long-practiced behaviors’ are at some point ‘no longer correct’: in that case the format of one’s own thinking and behavior can come into increasing contradiction with the experiential world. Then the point has come where the immanent infinity ‘manifests itself’ in everyday finiteness and ‘demonstrates’ to us that the ‘imagined cosmos in our head’ is just not the ‘true cosmos’. In the end this immanent infinity is ‘truer’ than the ‘apparent finiteness’.

HOMO SAPIENS (WE)

Besides the life-free material processes in this finite world there have been, for approximately 3.5 billion years, the manifestations which we call ‘life’, and very late – quasi ‘just now’ – there appeared among the billions of life forms one which we call ‘Homo sapiens’. That is us.

Today’s knowledge of the ‘way’ which life has ‘taken’ in these 3.5 billion years was and is only possible because science has learned to understand the ‘seemingly finite’ as a ‘snapshot’ of an ongoing process of change, which shows its ‘truth’ only in the ‘totality of the individual moments’. That we as human beings, as the ‘latecomers’ in this life-creation process, have the ability to ‘recognize’ successive ‘moments’ both ‘individually’ and ‘in sequence’ is due to the special nature of the ‘brain’ in the ‘body’ and to the way in which our body ‘interacts’ with the surrounding world. So we do not know about the ‘existence of an immanent infinity’ ‘directly’, but only ‘indirectly’ through the ‘processes in the brain’ that can identify, store, process, and ‘arrange’ moments in possible sequences in a ‘neuronally programmed way’. In short: our brain enables us, on the basis of a given neuronal and physical structure, to ‘construct’ an ‘image/model’ of a possible immanent infinity which we assume ‘represents’ the ‘events around us’ reasonably well.

THINKING

One characteristic attributed to Homo sapiens is called ‘thinking’; a term which to this day is described only vaguely and very differently by the different sciences. From another Homo sapiens we learn about his thinking only through his ‘behavior’, a special case of which is ‘linguistic communication’.

Linguistic communication is characterized by the fact that it basically works with ‘abstract concepts’ to which, as such, no single object in the real world directly corresponds (‘cup’, ‘house’, ‘dog’, ‘tree’, ‘water’, etc.). Instead, the human brain assigns the most diverse concrete perceptions ‘completely automatically’ (‘unconsciously’!) to one or another abstract concept, in such a way that a human A can agree with a human B on whether the concrete phenomenon in front of them is assigned to the abstract concept ‘cup’, ‘house’, ‘dog’, ‘tree’, or ‘water’. At some point in everyday life, person A knows which concrete phenomena can be meant when person B asks him whether he has a ‘cup of tea’, or whether the ‘tree’ bears apples, etc.

This empirically well-documented ‘automatic formation’ of abstract concepts by our brain is not based on a single moment alone; these automatic construction processes work with ‘perceptual sequences’ of finite moments ‘embedded in changes’, which the brain itself also creates automatically. ‘Change as such’ is thus not a ‘typical object’ of perception, but the ‘result of a process’ taking place in the brain, which constructs ‘sequences of single perceptions’; and these ‘computed sequences’ enter as ‘elements’ into the formation of ‘abstract concepts’: a ‘house’ is, from this point of view, not a ‘static concept’ but a concept which can comprise many single properties and which is ‘dynamically generated’ as a ‘concept’, so that ‘new elements’ can be added or ‘existing elements’ taken away again.

MODEL: WORLD AS A PROCESS

(The words in the figure are taken from the German text.)

Although there is no universally accepted comprehensive theory of human thought to date, there are many different models (the everyday term for what are more correctly called ‘theories’) that attempt to approximate important aspects of human thought.

The preceding image shows the outlines of a minimally simple model of our thinking.

This model assumes that the surrounding world – with ourselves as components of that world – is to be understood as a ‘process’ in which, at a chosen ‘point in time’, one can describe in an idealized way all the ‘observable phenomena’ that are important to the observer at that point in time. This description of a ‘section of the world’ is here called ‘situation description’ at time t or simply ‘situation’ at t.

Then one needs ‘knowledge about possible changes’ of elements of the situation description, in the (simplified) form: ‘If X is an element of the situation description at t, then in the subsequent situation at t+1 X is either deleted or replaced by a new X*’. There may be several alternatives for deletion or replacement, with different probabilities. Such ‘descriptions of changes’ are here simply called ‘change rules’.

Additionally, as part of the model, there is a ‘game instruction’ (classically: an ‘inference concept’), which explains when and how to apply a change rule to a given situation Sit at t in such a way that at the subsequent time t+1 there is a situation Sit* in which the changes described by the change rule have been made.

Normally there is more than one change rule, and several rules can be applied simultaneously. This is also part of the game instructions.

This minimal model can and must be seen against the background of continuous change.

For this structure of knowledge it is assumed that one can describe ‘situations’ and possible changes of such a situation, and that one has a concept of how to apply descriptions of recognized possible changes to a given situation.

With the recognition of an immanent infinity manifested in many concrete finite situations, it is immediately clear that the set of assumed descriptions of change should correspond with the observable changes, otherwise the theory has little practical use. Likewise, of course, it is important that the assumed situation descriptions correspond with the observable world. Fulfilling the correspondence requirements or checking that they are true is anything but trivial.
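
To make the interplay of ‘situation’, ‘change rules’ and ‘game instruction’ more tangible, here is a minimal computational sketch in Python. It is only an illustrative approximation under stated assumptions: the rule format, the example facts and the probabilities are invented for this sketch and are not taken from the text or from any specific software. A situation is a set of facts, a change rule deletes or replaces a fact (possibly with weighted alternatives), and the ‘game instruction’ applies all applicable rules to produce the situation at t+1.

import random

# A 'situation' at time t: a set of facts (strings), e.g. {"X", "Y"}.
# A 'change rule' (simplified, as in the text): if X is in the situation at t,
# then at t+1 X is either deleted or replaced by some X*, with probabilities.

# Assumed rule format for this sketch: (condition, [(replacement or None, prob), ...])
rules = [
    ("lamp_on", [("lamp_off", 0.7), (None, 0.3)]),        # replace or delete
    ("door_closed", [("door_open", 1.0)]),
]

def apply_rules(situation, rules, rng=random):
    """'Game instruction': apply every applicable change rule once,
    producing the subsequent situation at t+1."""
    next_situation = set(situation)
    for condition, alternatives in rules:
        if condition in situation:
            outcomes, probs = zip(*alternatives)
            chosen = rng.choices(outcomes, weights=probs, k=1)[0]
            next_situation.discard(condition)       # X is removed ...
            if chosen is not None:
                next_situation.add(chosen)          # ... or replaced by X*
    return next_situation

sit_t = {"lamp_on", "door_closed", "table"}
sit_t1 = apply_rules(sit_t, rules)
print(sit_t1)   # e.g. {'table', 'lamp_off', 'door_open'}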

ABSTRACT – REAL – INDETERMINATE

Regarding these ‘correspondence requirements’, here are some additional considerations in which the everyday perspective comes into view.

It is to be noted that a ‘model’ is not the environment itself, but only a ‘symbolic description’ of a section of the environment, from the point of view and with the understanding of a human ‘author’! Only the author himself knows to which properties of the environment a description refers: he ‘links’ the chosen ‘symbols’ (text or speech) ‘in his head’ with certain properties of the environment, whereby these properties of the environment must also be represented ‘in the head’, quasi as ‘knowledge images’ of ‘perception events’ which were triggered by the environmental properties. These ‘knowledge images in the head’ are ‘real’ for the respective head; compared to the environment, however, they are basically only ‘fictitious’, unless there is currently a connection between the current fictitious ‘images in the head’ and the ‘current perceptions’ of ‘environmental events’ which makes the ‘concrete elements of perception’ appear as ‘elements of the fictitious images’. Then the ‘fictitious’ images would be ‘fictitious and real’.

Due to ‘memory’, whose ‘contents’ are more or less ‘unconscious’ in the ‘normal state’, we can however ‘remember’ that certain ‘fictitious images’ were once ‘fictitious and real’ in the past. This can lead to a tendency in everyday life to ascribe a ‘presumed reality’, even in the current present, to fictitious images that were once ‘real’ in the past. This tendency is probably of high practical importance in everyday life, and in many cases these ‘assumptions’ do work. However, this ‘spontaneous-for-real-holding’ can often be off the mark; it is a common source of error.

The ‘spontaneous-for-real-holding’ can be disadvantageous for many reasons. For example, the fictional images (as inescapably abstract images) may in themselves be only ‘partially appropriate’. The context of the application may have changed. In general, the environment is ‘in flux’: facts that were given yesterday may be different today.

The reasons for these persistent changes are diverse. Besides changes which we can recognize from our experience as an ‘identifiable pattern’, there are also changes which we cannot yet assign to a pattern; these can have a ‘random character’ for us. Finally, there are also the different ‘forms of life’, which are basically ‘not determined’ by their system structure in spite of all ‘partial determinateness’ (one can also call this ‘immanent freedom’). The behavior of these life forms can run contrary to all other recognized patterns. Furthermore, life forms behave only partially ‘uniformly’, although everyday structures with their ‘rules of behavior’ – and many other factors – can ‘push’ life forms and their behavior in a certain direction.

If one recalls at this point the preceding thoughts about ‘immanent infinity’ and the view that single, finite moments are only understandable as ‘part of a process’ whose ‘logic’ is to a large extent still undeciphered, then it is clear that any kind of ‘modeling’ within these comprehensive change processes can only have a preliminary, approximate character. This is aggravated by the fact that the human actors are not only ‘passively receiving’ but always also ‘actively acting’, and thereby influence the change process by their own actions! These human influences arise from the same immanent infinity as those which cause all other changes. People (like life as a whole) are thus inevitably, really, ‘co-creative’ … with all the responsibilities which result from this.

MORALITY ABOVE ALL

What exactly is to be understood by ‘morality’ must be gathered from many hundreds – or even more – different texts. Every epoch – and even every region of this world – has developed different versions.

In this text it is assumed that ‘morality’ refers to those ‘views’ which are intended to give an individual person (or a group or …) ‘hints’, in decision questions of the kind “Should I rather do A or B?”, as to how this question can ‘best’ be answered.

If one recalls at this point what was said before about that form of thinking which allows ‘prognoses’ (thinking in explicit ‘models’ or ‘theories’), then there must be an ‘evaluation’ of the ‘possible continuations’ that is independent of a current ‘situation description’ and independent of the available ‘knowledge of change’. So there must be, ‘besides’ the description of a situation as it ‘is’, at least a ‘second level’ (a ‘meta-level’) which can ‘talk about’ the elements of the ‘object level’, so that, for example, it can be said at the meta-level that an ‘element A’ of the object level is ‘good’ or ‘bad’ or ‘neutral’, or ‘good’/‘bad’/‘neutral’ to a certain graded degree. This can also concern several elements or whole subsets of the object level. This can be done. But for it to be ‘rationally acceptable’, these valuations would have to be linked to ‘some form of motivation’ as to ‘why’ this valuation should be accepted. Without such a ‘motivation of evaluations’, such an evaluation would appear as ‘pure arbitrariness’.
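
The two-level structure just described (an object level plus a meta-level of graded valuations tied to an explicit motivation) can be sketched minimally as follows; the Python example below is purely illustrative, and all concrete options, weights and motivations are invented placeholders.

# Sketch of a meta-level over an object level: possible continuations of a
# situation are valued 'good'/'bad'/'neutral' with a graded weight, and every
# valuation carries an explicit 'motivation' so that it is not pure arbitrariness.
# All concrete entries are illustrative assumptions.

continuations = {
    "A": {"description": "repair the well"},
    "B": {"description": "postpone the repair"},
}

# meta-level: element -> (valuation, degree in [0,1], motivation)
valuations = {
    "A": ("good", 0.8, "secures drinking water for everyone in the village"),
    "B": ("bad", 0.6, "risks a water shortage during the dry season"),
}

def rank(continuations, valuations):
    """Order continuations by their graded valuation at the meta-level."""
    def score(key):
        kind, degree, _ = valuations[key]
        sign = {"good": 1, "neutral": 0, "bad": -1}[kind]
        return sign * degree
    return sorted(continuations, key=score, reverse=True)

for option in rank(continuations, valuations):
    kind, degree, motivation = valuations[option]
    print(option, kind, degree, "because:", motivation)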

At this point the ‘air’ becomes quite ‘thin’: in history so far no convincing model of moral justification has become known which does not, in the end, depend on the decision of humans to set certain rules as ‘valid for all’ (family, village, tribe, …). Often the justifications can still be located in the concrete ‘circumstances of life’; just as often the concrete circumstances of life ‘recede into the background’ in the course of time, and abstract concepts are introduced instead, endowed with a ‘normative power’ which eludes any more concrete analysis. Rational access is then hardly possible, if at all.

In a time like the year 2023, in which the available knowledge is sufficient to recognize the interdependencies of literally everyone on everyone else, and in which the dynamics of change – with components such as ‘global warming’ – can substantially threaten the ‘sustainable existence of life on earth’, ‘abstractly set normative terms’ appear not only ‘out of date’; they are highly dangerous, since they can substantially hinder the preservation of life in the further future.

META-MORAL (Philosophy)

The question then arises whether this ‘rational black hole’ of ‘justification-free normative concepts’ marks the end of human thinking or whether thinking should instead just begin here?

Traditionally, ‘philosophy’ understands itself as that attitude of thinking in which every ‘given’ – including any kind of normative concept – can be made an ‘object of thinking’. And it is precisely philosophical thinking that has produced exactly this result in millennia of struggle: there is no point in thinking from which all ought, all evaluating, can be derived ‘just like that’.

In the space of philosophical thinking, on the meta-moral level, it is possible to ‘thematize’ more and more aspects of our situation as ‘mankind’ in a dynamic environment (with man himself as part of this environment), to ‘name’ them, to place them in ‘potential relations’, and to conduct ‘thought experiments’ about ‘possible developments’; but this philosophical meta-moral knowledge is completely transparent and always identifiable. The inferences about why something seems ‘better’ than something else are always ‘embedded’, ‘related’. Against this background, the demands for an ‘autonomous morality’, for an ‘absolute morality’ apart from philosophical thinking, appear ‘groundless’, ‘arbitrary’, ‘alien to the matter’. A rational justification is not possible.

Something ‘rationally unknowable’ may exist, indeed exists inescapably, but this rationally unknowable is our sheer existence, the actual real occurrence, for which so far there is no rational ‘explanation’; more precisely: not yet. But this is not a ‘free pass’ for irrationality. In ‘irrationality’ everything disappears, even the ‘rationally unknowable’, and the latter belongs to the most important ‘facts’ in the world of life.

COMMENTS

[1] The different forms of ‘infinity’ which have been introduced into mathematics with the work of Georg Cantor, and which have since been intensively investigated, have nothing to do with the experienceable finiteness/infinity described in this text: https://en.wikipedia.org/wiki/Georg_Cantor . However, if one wants to ‘describe’ the ‘experience’ of real finiteness/infinity, one will possibly want to fall back on descriptive means of mathematics. But it is not a foregone conclusion whether the mathematical concepts ‘harmonize’ with the empirical experience at issue.

Abstract Elements & Glimpses of an Ontology


eJournal: uffmm.org
ISSN 2567-6458, 28.February 2023 – 28.February 2023, 10:45 CET
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Parts of this text have been translated with www.DeepL.com/Translator (free version), afterwards only minimally edited.

— Not yet finished —

CONTEXT

This post is part of the book project ‘oksimo.R Editor and Simulator for Theories’.

Abstract Elements

Figure 1

The abstract elements introduced so far are still few, but they already allow us to delineate a certain ‘abstract space’. Thus there are so far:

  1. abstract elements in current memory (also ‘consciousness’) based on concrete perception,
  2. which can then pass over into stored abstract – and dynamic – elements of potential memory,
  3. further abstract concepts of n-th order in current as well as in potential memory,
  4. abstract elements in current memory (also ‘consciousness’) based on concrete perception which function as linguistic elements,
  5. which can then also pass over into stored abstract – and dynamic – elements of potential (linguistic) memory,
  6. likewise abstract linguistic concepts of n-th order in current as well as in potential memory,
  7. abstract relations between abstract linguistic elements and abstract other elements of current as well as potential memory (‘meaning relations’),
  8. linguistic expressions for the description of factual changes, and
  9. linguistic expressions for the description of analytic changes.

The generation of abstract linguistic elements thus allows in many ways the description of changes of something given, which (i) is either only ‘described’ as an ‘unconditional’ event or (ii) works with ‘rules of change’ which clearly distinguish between ‘condition’ and ‘effect’. This second case with change rules can be related to many varieties of ‘logical inference’. In fact, any known form of ‘logic’ can be ‘emulated’ with this general concept of change rules.

This idea, only hinted at here, will be explored in some detail and demonstrated in various applications as we proceed.
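
As a small, hedged illustration of the claim that logical inference can be ‘emulated’ by change rules, the following Python sketch writes a modus-ponens step as a change rule: if a situation contains a statement p together with a conditional ‘if p then q’, the rule adds q. All names and the encoding of conditionals as pairs are assumptions made only for this example.

# Sketch: emulating a simple logical inference (modus ponens) with a change rule.
# A 'situation' is a set of accepted statements; conditionals are pairs (p, q)
# read as "if p then q". The change rule adds the 'effect' q whenever its
# 'condition' (both p and (p, q)) is present. Names are illustrative only.

def modus_ponens_rule(situation):
    """One change rule: condition = {p, (p, q)} present, effect = add q."""
    derived = set(situation)
    for fact in situation:
        if isinstance(fact, tuple) and len(fact) == 2:
            p, q = fact
            if p in situation:
                derived.add(q)
    return derived

def closure(situation):
    """Apply the rule repeatedly until no new statements appear."""
    current = set(situation)
    while True:
        nxt = modus_ponens_rule(current)
        if nxt == current:
            return current
        current = nxt

sit = {"it_rains", ("it_rains", "street_is_wet"), ("street_is_wet", "slippery")}
print(closure(sit))   # contains 'street_is_wet' and 'slippery'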

Glimpses of an Ontology

Already these few considerations about ‘abstract elements’ show that there are different forms of ‘being’.[1]

In the scheme of FIG. 1, there are those givens in the real external world which can become the trigger of perceptions. However, our brain cannot directly recognize these ‘real givens’, only their ‘effects in the nervous system’: first (i) as a ‘perceptual event’, then (ii) as a ‘memory construct’, distinguished into (ii.1) ‘current memory’ (working memory, short-term memory, …) and (ii.2) ‘potential memory’ (long-term memory, various functional classifications, …).[2]

If one calls the ‘contents’ of perception and current memory ‘conscious’ [3], then the primary form of ‘being’, which we can directly get hold of, would be those ‘conscious contents’, which our brain ‘presents’ to us from all its neuronal calculations. Our ‘current perceptions’ then stand for the ‘reality out there’, although we actually cannot grasp ‘the reality out there’ ‘directly, immediately’, but only ‘mediated, indirectly’.

Insofar as we are ‘aware’ of ‘current contents’ that ‘potential memory’ makes ‘available’ to us (usually called ‘remembering’ in everyday life; as a result, a ‘memory’), we also have some form of ‘primary being’ available, but this primary being need not have any current perceptual counterpart; hence we classify it as ‘only remembered’ or ‘only thought’ or ‘abstract’ without ‘concrete’ perceptual reference.

For the question of the correspondence in content between ‘real givenness’ and ‘perceived givenness’ as well as between ‘perceived givenness’ and ‘remembered givenness’ there are countless findings, all of which indicate that these two relations are not ‘1-to-1’ mappings under the aspect of ‘mapping similarity’. This is due to multiple reasons.

In the case of perceptual similarity with the triggering real givens, the interaction between the real givens and the respective sense organs already plays a role, followed by the processing of the primary sense data by the sense organ itself and by the subsequent processing in the nervous system. The brain works with ‘time slices’, with ‘selection/condensation’ and with ‘interpretation’. The latter results from the ‘echo’ from potential memory that ‘comments’ on current neural events. In addition, different ‘emotions’ can influence the perceptual process. [4] The ‘final’ product of transmission, processing, selection, interpretation and emotion is then what we call ‘perceptual content’.
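
The chain just described (transmission by the sense organs, selection/condensation, interpretation via the ‘echo’ from potential memory, emotional modulation) can be pictured with a toy pipeline. In the following Python sketch every stage is a deliberately crude stand-in and all features, labels and weights are invented, so it should be read only as an illustration of the ordering of the stages, not as an empirical model.

# Toy illustration of the described perceptual chain; each stage is a crude
# stand-in for complex neural processing, not an empirical model.

def transmit(stimulus):
    """Sense organ: only part of the physical stimulus is passed on."""
    return {k: v for k, v in stimulus.items() if k in {"shape", "color", "motion"}}

def select(signal):
    """Selection/condensation: keep only the currently salient features."""
    return {k: v for k, v in signal.items() if v.get("salient", False)}

def interpret(signal, potential_memory):
    """'Echo' from potential memory 'comments' on the current signal."""
    current = {k: v["value"] for k, v in signal.items()}
    labels = [label for label, pattern in potential_memory.items()
              if pattern.items() <= current.items()]
    return {"features": signal, "interpretation": labels}

def modulate(content, emotion):
    """Emotions can shift what finally becomes 'perceptual content'."""
    content["weight"] = 1.5 if emotion == "fear" else 1.0
    return content

stimulus = {"shape": {"value": "round", "salient": True},
            "color": {"value": "red", "salient": True},
            "temperature": {"value": "warm", "salient": False}}
memory = {"apple": {"shape": "round", "color": "red"}}

percept = modulate(interpret(select(transmit(stimulus)), memory), emotion="calm")
print(percept)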

In the case of ‘memory similarity’, the processes of ‘abstracting’ and ‘storing’, the continuous ‘activations’ of memory contents, as well as the ‘interactions’ between remembered items indicate that ‘memory contents’ can change significantly in the course of time without the person who is currently remembering being able to read this off from the memory contents themselves. In order to recognize these changes, one needs ‘records’ of preceding points in time (photos, films, protocols, …), which can provide clues to the real circumstances against which one can compare one’s memories.[5]

As one can see from these considerations, the question of ‘being’ is not a trivial question. Single fragments of perceptions or memories tend not to be 1-to-1 ‘representatives’ of possible real conditions. In addition, there is the high ‘rate of change’ of the real world, not least through the activities of humans themselves.

COMMENTS

[1] The word ‘being’ is one of the oldest and most popular concepts in philosophy. In the case of European philosophy, the concept of ‘being’ appears in the context of classical Greek philosophy and spreads through the centuries and millennia throughout Europe and then into those cultures that had/have an exchange of ideas with European culture. The systematic occupation with the concept of ‘being’ is what philosophers called, and still call, ‘ontology’. See the article ‘Ontology’ in wkp-en: https://en.wikipedia.org/wiki/Ontology .

[2] On the subject of ‘perception’ and ‘memory’ there is a huge literature in various empirical disciplines. The most important may well be ‘biology’, ‘experimental psychology’, and ‘brain science’, supplemented by philosophical ‘phenomenology’ and combinations of these such as ‘neuro-psychology’ or ‘neuro-phenomenology’, etc. In addition there are countless other special disciplines such as ‘linguistics’ and ‘neuro-linguistics’.

[3] A question that remains open is how the concept of ‘consciousness’, which is common in everyday life, is to be placed in this context. Like the concept of ‘being’, the concept of ‘consciousness’ has been and still is very prominent in recent European philosophy, but it has also received strong attention in many empirical disciplines; especially in the field of tension between philosophical phenomenology, psychology and brain research, there is a long and intense debate about what is to be understood by ‘consciousness’. Currently (2023) there is no clear, universally accepted outcome of these discussions. Of the many available working hypotheses, the author of this text considers the connection to the empirical models of ‘current memory’ in close connection with the models of ‘perception’ to be the most comprehensible so far. In this context also the concept of the ‘unconscious’ would be easy to explain. For an overview see the entry ‘consciousness’ in wkp-en: https://en.wikipedia.org/wiki/Consciousness

[4] In everyday life we constantly experience that different people perceive the same real events differently, depending on which ‘mood’ they are in, which current needs they have at the moment, which ‘previous knowledge’ they have, and what their real position to the real situation is, to name just a few factors that can play a role.

[5] Classical examples of the lack of quality of memories have always been ‘testimonies’ to certain events. Testimonies almost never agree ‘1-to-1’, at best ‘structurally’, and even in this there can be ‘deviations’ of varying strength.

chatGPT about Rationality: Emotions, Mysticism, Unconscious, Conscious, …

eJournal: uffmm.org
ISSN 2567-6458, 15.January 2023 – 16.January 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text reflects some ideas following the documented chat No.4 as part of the uffmm.org blog.

Remark: See for a follow-up reflection the text of my post “chatGPT – How drunk do you have to be …” from 15./16. February 2023.

Chatbots as Interfaces to the Human Knowledge Cloud?

Already at the end of the documented chat No.4 I had the impression that an interaction with the chatbot chatGPT is somehow different compared to what most people have until now reported or stated in some reflected way about chatGPT.

In a first — and still a little bit vague — conclusion I have stated: “Apropos ‘rational’: that is a point which did surprise me really: as a kind of a summary it came out “that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. This is clearly not what most philosophers today would say. But it follows from the ‘richness of the facts’ which came as a resonance out of this chat. Not that the chatbot would have given this summary in advance as an important characterization of rationality, but as a human counterpart I could summarize all these properties out of the different separated statements [of chatGPT].”

And, indeed, the millions, if not yet billions, of documents in the world wide web are reflecting fragments of knowledge generated by humans which as a whole form a ‘cloud of knowledge’. The reflected echo of the real world through the medium of human brains is distributed in libraries and in the digital space. No individual person as such can make use of it; it is by far too big to be comprehensible.

Meanwhile search-algorithms can help us to make fragments of this ‘human knowledge cloud’ visible, but the search-results so far are ‘transformed’ in a way which is only of very limited use.

My encounter with chatGPT revealed some new glimpses of a possible new paradigm which perhaps wasn’t intended by OpenAI themselves, but which now seems to be within reach: the individual brain has a limited capacity for ‘many documents’, but it still has an incredible ability to ‘transform’ billions of events into complex abstract patterns, inducing complex networks of relations, complex models, even complex theories.

If one looks at a chatbot like chatGPT as an ‘interface’ between a single human person and the ‘human knowledge cloud’, and this only by using ‘everyday language’, then — depending on the quality of the chatbot — this individual human can, with just a few ideas and questions, ‘trigger’ those documents in the huge ‘human knowledge cloud’ which somehow ‘fit’ the triggering words. Thus this individual human person step-wise encounters those fragments of the ‘human knowledge cloud’ which are in ‘resonance’ with his own words. In a bad case this interaction resembles the ‘echo chambers’ of the internet and the real world, where people only get back what they put in.

But in a creative case the input of the individual human person can trigger fragments which are partly confirming and partly non-confirming. This can prompt the individual human person to generate some new ideas and feed them back into the chatbot interface again. While the ‘human knowledge cloud’ at a certain moment of time is as such ‘given’ and will not create ‘something new’, an individual person as an ever-running learning process could (in principle) generate new aspects and thereby transform the feedback out of the ‘human knowledge cloud’ into some ‘new configurations’.

This principle as such is not new. The human brain has been working for millions of years with a mechanism whereby the billions of abstract-concrete cognitive entities derived from perception and thinking are ‘stored’ in a more or less ‘unconscious space’ called memory, and only if some actual perception or thought ‘touches’ some of these stored entities through some ‘kind of similarity’ are they re-activated and become part of the actual perceiving and thinking. Thus ‘actual new things’ can become merged with ‘past stored things’, forming a new-past construct. This inherent duality of ‘new-past’ is the basis of that important human capability to transcend the almighty present and escape its grasp.

If we switch to the level of the human population, then the ‘human knowledge cloud’ plays the role of the ‘unconscious memory system’ collecting ‘past things’, and some kinds of chatbots (if they were good enough) could play the role of the ‘merger mechanism’ bringing together the ‘past’ (given in the ‘human knowledge cloud’) and the ‘present’ (given in the actually living human person); and it is then up to the human person to generate some useful new-past patterns of thoughts and possible actions.
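
A hedged toy sketch of this ‘merger mechanism’ may help to picture it. In the following Python example the stored entries, the overlap measure and the threshold are invented for illustration only: stored ‘past’ items are re-activated when a current input ‘touches’ them by some kind of similarity, and the re-activated items are merged with the input into a ‘new-past’ construct.

# Toy sketch of similarity-triggered re-activation of stored 'past' items and
# their merging with a current input into a 'new-past' construct.
# The stored entries and the overlap measure are illustrative assumptions.

memory_store = [
    {"topic": "rationality", "aspects": {"logic", "argument"}},
    {"topic": "mysticism", "aspects": {"experience", "silence"}},
    {"topic": "emotion", "aspects": {"feeling", "decision"}},
]

def similarity(current, stored):
    """Crude 'kind of similarity': relative overlap of aspect sets."""
    if not stored["aspects"]:
        return 0.0
    return len(current & stored["aspects"]) / len(current | stored["aspects"])

def reactivate_and_merge(current_aspects, store, threshold=0.15):
    """Re-activate sufficiently similar past items and merge them with the input."""
    activated = [item for item in store
                 if similarity(current_aspects, item) >= threshold]
    merged = set(current_aspects)
    for item in activated:
        merged |= item["aspects"]
    return {"activated_topics": [i["topic"] for i in activated],
            "new_past_construct": merged}

print(reactivate_and_merge({"decision", "argument"}, memory_store))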

In the case of chat No.4 the human actor triggered lots of questions and suggestions to get ‘out of the human knowledge cloud’ those fragments which somehow correspond to the triggering words. And after about 8,000 characters the chat ended with these paragraphs:

Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?

Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.

By itself the chatbot would not have been able to start with such a characterization. It can ‘reach out’ into the ‘human knowledge cloud’, but the bot algorithm as such has — as far as research can see at the moment — no kind of ‘creative rational power’ to transform the ‘given knowledge’ into some new ‘helpful’ knowledge. On the other side, a human person alone would not be able to make use of ‘all the available knowledge’ either.

In the course of the interaction the human questions could ‘drive’ the bot to collect ‘interesting facts’ which could then become ‘accepted’ by the bot because they have become ‘part of the chat’. Thus at the end of the chat the bot could accept that human rationality is composed of emotions, spiritual experience as well as conscious and unconscious cognitive processes. A human person ‘helped it’ to state this. The bot algorithm as such does not know anything and it cannot understand anything. Because chatbots — until now — do not possess real emotions, no real mystical experience, no unconscious or conscious human-like cognitive processes, they have no intelligence in the human format.

It is an open question what kind of ‘intelligence’ they have at all. Until now a great number of ‘definitions’ have been around. No single one is accepted as ‘the’ definition; especially the relationship between the ‘collection of machine intelligence definitions’ and the possible — also not really existing — collection of ‘human intelligence definitions’ is more or less unclear. Thus we are somehow ‘dreaming’ of intelligence, but nobody can really explain what it is … We could seriously try, if we wanted to … but who wants that?

COMMON SCIENCE as Sustainable Applied Empirical Theory, besides ENGINEERING, in a SOCIETY

eJournal: uffmm.org
ISSN 2567-6458, 19.June 2022 – 30.December 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of the Philosophy of Science theme within the uffmm.org blog.

This is work in progress:

  1. The whole text shows a dynamic which induces many changes. It is difficult to plan ‘in advance’.
  2. Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
  3. I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.

INTRODUCTION

In a rather foundational paper about an idea of how one can generalize ‘systems engineering’ [*1] into the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in the light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge to see every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].

With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, possible ideas, possible experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta-level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied we will use the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.

The following chapters/ sections try to illustrate this common science view by visiting different special views which all are only ‘parts of a whole’, a whole which we can ‘feel’ in every moment, but which we can not yet completely grasp with our theoretical concepts.

CONTENT

  1. Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
  2. Concrete Abstract Statements (Main message: “… you will probably detect, that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. …If you cannot provide … ‘concrete situations’ the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending from the decoding of the hearer.”)
  3. True False Undefined (Main message: “… it reveals that ‘empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences, which are ‘observation friendly’.”)
  4. Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.“)
  5. Playing with the Future (Main message: “In this sense seems ‘language’ to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fix points (= words, expressions) which as such do not change, but the meaning is ‘free to change’ in any direction. And this ‘built in ‘dynamics’ represents an ‘internal potential’ for uncountable many possible states, which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures.(but see [18])”)
  6. Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
  7. !!! From here all the following chapters have to be re-written !!!
  8. THE LOGIC OF EVERYDAY THINKING. Let’s try an Example (Will probably be re-written too)
  9. Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
  10. … more re-writing will probably happen …
  11. Everyday Language: German Example
  12. Everyday Language: English
  13. Natural Logic
  14. Predicate Logic
  15. True Statements
  16. Formal Logic Inference: Preserving Truth
  17. Ordinary Language Inference: Preserving and Creating Truth
  18. Hidden Ontologies: Cognitively Real and Empirically Real
  19. AN INFERENCE IS NOT AUTOMATICALLY A FORECAST
  20. EMPIRICAL THEORY
  21. Side Trip to Wikipedia
  22. SUSTAINABLE EMPIRICAL THEORY
  23. CITIZEN SCIENCE 2.0
  24. … ???

COMMENTS

wkp-en := English Wikipedia

/* Often people argue against the usage of the wikipedia encyclopedia as not ‘scientific’ because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as ‘stable’, which further presupposes that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ is in no correspondence with real scientific texts! The reality of empirical science — even in some special disciplines like ‘physics’ — is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, Newton, Einstein and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour for Wikipedia */

[*1] Meaning operator ‘…’ : In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (theory of formal languages, programming languages, meta-logic) the inverted comma is used in some special way. In this text, which is primarily a philosophical text, the inverted comma sign is used as a ‘meta-language operator’ to raise the attention of the reader to the fact that the ‘meaning’ of the word enclosed in the inverted commas is ‘text specific’: in everyday language usage the speaker uses a word and tacitly assumes that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to his assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and a decision as to which one of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus it can be — but need not be — a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which is perhaps not the commonly assumed one. Nevertheless, because the ‘common meaning’ is no ‘clear and sharp subject’, a ‘meaning operator’ with inverted commas also does not have a very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂

[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but in some sense the problem of how to ‘shape the whole biosphere’ in a way which enables life in the universe for the time beyond that point where the sun turns into a ‘red giant’, whereby life will become impossible on the planet earth (some billion years ahead)[22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harvey, summarized at the end of the text. [23]

[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Look for instance to houses, buildings: to a certain degree parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system, most parts of our ‘daily life’ are following certain ‘standards’ making ‘planning’ possible. But there exists a certain percentage of human persons which are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical manner’ with different probabilities of occurrences. To ‘find our way’ in these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ to enable a basic structure for expectations and planning. Thus, if in this text the expression ‘normal’ is used it refers to the ‘most common occurrences’.

[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.

[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence ” … ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ and ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint as to which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ were never possible and are still not possible. One of the last big — perhaps the biggest ever — projects for a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer Deutschen Grammatik” (“Basic features of a German grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently describable and explainable. See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.

[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’, can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But as one can experience, the ability of people to handle the occurrence of different opinions is often quite weak. Culture as a whole suffers from this.

[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post which are also not yet finished. Knowledge is a dynamic network of interwoven views …).

[1d] ‘usual science’ is the game of science without having a sustainable format like in citizen science 2.0.

[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science

Citation = “Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1][2]”

Citation = “New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems.[27][28] Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions,[29] government agencies, and companies.[30][31] The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritizing the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.”

[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science

[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.

Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”

[4] Scientific theory, see: wkp-en: https://en.wikipedia.org/wiki/Scientific_theory

Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”

[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism

[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method

Citation = “The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]” [4c]

and

Citation = “The purpose of an experiment is to determine whether observations agree with or conflict with the expectations deduced from a hypothesis.[6] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”

[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/

[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic

[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)

[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory

[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique

[10] = [5]

[11] Ludwig Josef Johann Wittgenstein (1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */

[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain notice a ‘difference’ between the triggering real world outside and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly have to notice that it is not at the place where it ‘should be’. …

[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.

[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra

[17] Boolean (or propositional) Logic: As one can see in the mentioned articles of the English Wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15], and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the leading idea is that of ‘language use’ with ‘logical inference’ at its core. The main topic is therefore ‘logic’, which in the case of propositional logic is reduced to a simple calculus whose similarity with ‘normal language’ is largely reduced to a play with abstract names and operators. Recommended: the historical comments in [15].

[18] Clearly, thinking alone cannot by itself turn a possible state into a state that will become ‘real’ along the time line. There are numerous factors ‘outside’ the individual thinking which act as ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with the thinking of other individuals and, in some cases, can get a ‘grip’ on real factors causing real changes.

[19] This kind of knowledge is not delivered by brain science alone but primarily by experimental (cognitive) psychology, which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.

[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.

[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022

[22] The sun, see wkp-ed https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)

[23] By Clark, William C., and Alicia G. Harley – https://doi.org/10.1146/annurev-environ-012420-043621, Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069

[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability

[25] Sustainable Development in wkp-en: https://en.wikipedia.org/wiki/Sustainable_development

[26] Marope, P.T.M; Chakroun, B.; Holmes, K.P. (2015). Unleashing the Potential: Transforming Technical and Vocational Education and Training (PDF). UNESCO. pp. 9, 23, 25–26. ISBN 978-92-3-100091-1.

[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4

[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London

[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342

[30] Döben-Henisch, G.,  Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263

[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed clearly in a video featuring the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.

[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386

[] Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) in wkp-en, URL: https://en.wikipedia.org/wiki/Intergovernmental_Science-Policy_Platform_on_Biodiversity_and_Ecosystem_Services

[] IPBES (2019): Global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondizio, J. Settele, S. Díaz, and H. T. Ngo (editors). IPBES secretariat, Bonn, Germany. 1148 pages. https://doi.org/10.5281/zenodo.3831673

[] Michaelis, L. & Lorek, S. (2004). “Consumption and the Environment in Europe: Trends and Futures.” Danish Environmental Protection Agency. Environmental Project No. 904.

[] Pezzey, John C. V.; Toman, Michael A. (2002). “The Economics of Sustainability: A Review of Journal Articles” (PDF). Archived from the original (PDF) on 8 April 2014. Retrieved 8 April 2014.

[] World Business Council for Sustainable Development (WBCSD)  in wkp-en: https://en.wikipedia.org/wiki/World_Business_Council_for_Sustainable_Development

[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club

[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)

[] UN Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development: note / by the Secretary-General, https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (a more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )

/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */

[] Chaudhuri, S.,et al.Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).

[] Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior”, in Language, 35, No. 1 (1959), 26-58.(Online: https://chomsky.info/1967____/, accessed: July 21, 2022)

[] Churchman, C. West (December 1967). “Wicked Problems”. Management Science. 14 (4): B-141–B-146. doi:10.1287/mnsc.14.4.B141.

[-] Yen-Chia Hsu, Illah Nourbakhsh, “When Human-Computer Interaction Meets Community Citizen Science“,Communications of the ACM, February 2020, Vol. 63 No. 2, Pages 31-34, 10.1145/3376892, https://cacm.acm.org/magazines/2020/2/242344-when-human-computer-interaction-meets-community-citizen-science/fulltext

[] Yen-Chia Hsu, Ting-Hao ‘Kenneth’ Huang, Himanshu Verma, Andrea Mauri, Illah Nourbakhsh, Alessandro Bozzon, Empowering local communities using artificial intelligence, DOI:https://doi.org/10.1016/j.patter.2022.100449, CellPress, Patterns, VOLUME 3, ISSUE 3, 100449, MARCH 11, 2022

[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8

[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.

[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4

[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341

[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); http://eur-lex.europa.eu/eli/reg/2016/679/oj (in force since 25 May 2018) [26.2.2022]

[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973

[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2

[] Gudwin, R.R. (2002), Semiotic Synthesis and Semionic Networks, S.E.E.D. Journal (Semiotics, Energy, Evolution, Development), Volume 2, No.2, pp.55-83.

[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings

[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.

[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

[] Lenat, D. What AI can learn from Romeo & Juliet. Forbes (2019)

[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)

[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit’, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)

[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.

[] Markus Luczak-Roesch, Kieron O’Hara, Ramine Tinati, Nigel Shadbolt, Socio-technical Computation, CSCW’15 Companion, March 14–18, 2015, Vancouver, BC, Canada, ACM 978-1-4503-2946-0/15/03, http://dx.doi.org/10.1145/2685553.2698991

[] Marcus, G.F., et al. Overregularization in language acquisition. Monographs of the Society for Research in Child Development 57 (1998).

[] Gary Marcus and Ernest Davis, Rebooting AI, published by Pantheon, Sep 10, 2019, 288 pages

[] Gary Marcus, Deep Learning Is Hitting a Wall. What would it take for artificial intelligence to make real progress, March 10, 2022, URL: https://nautil.us/deep-learning-is-hitting-a-wall-14467/ (accessed: July 20, 2022)

[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017

[]  Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */

[] Olazaran, M. , A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).

[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530

[] Karl Popper, „A World of Propensities“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1988, slightly expanded reprint 1990, repr. 1995)

[] Karl Popper, „Towards an Evolutionary Theory of Knowledge“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)

[] Karl Popper, „All Life is Problem Solving“, article, originally a lecture given in German in 1991, first published in the German book „Alles Leben ist Problemlösen“ (1994), then in the English book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York

[] Rittel, Horst W.J.; Webber, Melvin M. (1973). “Dilemmas in a General Theory of Planning” (PDF). Policy Sciences. 4 (2): 155–169. doi:10.1007/bf01405730. S2CID 18634229. Archived from the original (PDF) on 30 September 2007. [Reprinted in Cross, N., ed. (1984). Developments in Design Methodology. Chichester, England: John Wiley & Sons. pp. 135–144.]

[] Ritchey, Tom (2013) [2005]. “Wicked Problems: Modelling Social Messes with Morphological Analysis”. Acta Morphologica Generalis 2 (1). ISSN 2001-2241. Retrieved 7 October 2017.

[] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th US ed., 2021, URL: http://aima.cs.berkeley.edu/index.html (accessed: July 20, 2022)

[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.

[] Skaburskis, Andrejs (19 December 2008). “The origin of “wicked problems””. Planning Theory & Practice 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘

[] Tonkinwise, Cameron (4 April 2015). “Design for Transitions – from and to what?”. Academia.edu. Retrieved 9 November 2017.

[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).

[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021): Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten [Paths towards an ecological machine economy: we need a ‘green governance of the machine economy’ to shape the interplay of the Internet of Things, artificial intelligence, and distributed ledger technology ecologically], Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828

[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681

[-] Sarah West, Rachel Pateman, 2017, “How could citizen science support the Sustainable Development Goals?“, SEI Stockholm Environment Institut , 2017, see: https://mediamanager.sei.org/documents/Publications/SEI-2017-PB-citizen-science-sdgs.pdf

[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543

[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422

[] Boder, A. (2006), “Collective intelligence: a keystone in knowledge management”, Journal of Knowledge Management, Vol. 10 No. 1, pp. 81-93. https://doi.org/10.1108/13673270610650120

[] Wikipedia, ‘Weak and strong sustainability’, https://en.wikipedia.org/wiki/Weak_and_strong_sustainability (accessed: July 21, 2022)

[] Florence Maraninchi, Let us Not Put All Our Eggs in One Basket. Towards new research directions in computer Science, CACM Communications of the ACM, September 2022, Vol.65, No.9, pp.35-37, https://dl.acm.org/doi/10.1145/3528088

[] AYA H. KIMURA and ABBY KINCHY, “Citizen Science: Probing the Virtues and Contexts of Participatory Research”, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.

[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a

[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007

[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp. 1179-1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006

[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011

[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105

[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0

[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691

[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094

[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence

[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285

[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147

[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004

[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663

[] Miguel de Castro Neto and Ana Espírtio Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14

[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568

[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273

[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3

[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5

[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80

[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15, DOI 10.1007/s40309-013-0015-4

[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30

[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032

[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427

[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331

[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x

[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77

[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53

[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569

[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology

[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313

[] AYA H. KIMURA and ABBY KINCHY (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages

[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, 19 December 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#

Comment by Gerd Doeben-Henisch:

[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.

[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821

[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, pp. 663–670

[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 340 Pine Street, Sixth Floor, San Francisco, CA, United States, conference 29 June – 2 July 2000

[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum (2019), Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations, Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s): https://arxiv.org/pdf/1904.06387.pdf

In the abstract you can read: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”

[] Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei, (2017), Deep reinforcement learning from human preferences, https://arxiv.org/abs/1706.03741

In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”

[] Melanie Mitchell (2021), Abstraction and Analogy-Making in Artificial Intelligence, https://arxiv.org/pdf/2102.10717.pdf

In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress.”

[] Melanie Mitchell (2021), Why AI is Harder Than We Think, https://arxiv.org/abs/2104.12871

In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”

[] Stuart Russell, (2019), Human Compatible: AI and the Problem of Control, Penguin books, Allen Lane; 1. Edition (8. Oktober 2019)

In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”

[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600

In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”

[] John H. Miller (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages, Paperback ISBN: 978-1947864429, DOI: 10.37911/9781947864429

In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”

[] Stefani A. Crabtree, Global Environmental Change, https://doi.org/10.1016/j.gloenvcha.2022.102597

In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”



HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 16, 2021 (minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 2: Problem & Vision

Context

This text is preceded by the following texts:

Introduction

Before one starts the HMI analysis, some stakeholders (in our case the users act as stakeholders as well as users in one role) have to present a given situation, classifiable as a 'problem', to depart from, and a vision as the envisioned goal to be realized.

Here we give a short description of the problem for the CM:MI paradigm and of the vision, i.e. of what should be gained.

Problem: Mankind on the Planet Earth

In this project, mankind on the planet earth is understood as the primary problem. 'Mankind' is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has, besides many other wonderful capabilities, at least two extraordinary capabilities:

Outside to Inside

The whole body with the brain is able to continuously convert body-external events into internal, neural events. The brain inside the body also receives many events from inside the body as external events. Thus in the brain we can observe a mix of body-external events (outside 1) and body-internal events (outside 2), realized as a set of billions of highly interrelated neural processes. Most of these neural processes are unconscious; a small part is conscious. Nevertheless, these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology and the brain sciences, this mapping is not a 1-1 mapping. The brain filters all the time, mostly unconsciously, sorting out only those events which it judges to be important. Furthermore the brain time-slices all its sensory inputs and stores these time-slices (called 'memories'), and these time-slices again are no 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain can differ greatly from the surrounding reality of the own body as well as from the world outside the own body. All kinds of perceived and stored neural events which are or can become conscious are here called conscious cognitive substrates or cognitive objects.

Inside to Outside (to Inside)

Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with hands, painting and writing, as well as sounds produced by the voice. Of special interest here are forms of communication between different humans, and more specifically those communications enabled by the spoken sounds of a language as well as by the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L but will never understand anything). But, as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with what kinds of shared experiences they can be associated. This learning involves many complex neural processes which map expressions internally onto conscious and unconscious cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events produced by conversions from the inside to the body-outside), a homo sapiens can transmit its internal encoding of cognitive objects as expressions from its inside to the outside, and another homo sapiens can perceive the produced outside expression and map this outside expression onto an internal expression. As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there is some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.
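
This described interplay of meaning functions can be illustrated with a small, purely hypothetical Python sketch; the actor names, expressions, and 'cognitive object' labels are my own illustrations, not part of the text. Each actor carries its own meaning function from expressions to cognitive objects, and communication works only to the degree that the two functions overlap.

```python
# Toy sketch only: actors, expressions and object labels are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Actor:
    name: str
    # meaning function: expression of language L -> internal cognitive object
    meaning: Dict[str, str] = field(default_factory=dict)

    def utter(self, cognitive_object: str) -> Optional[str]:
        """Encode an internal cognitive object into an outside expression."""
        for expression, obj in self.meaning.items():
            if obj == cognitive_object:
                return expression
        return None  # no expression available for this object

    def understand(self, expression: str) -> Optional[str]:
        """Map a perceived outside expression back onto a cognitive object."""
        return self.meaning.get(expression)

sender = Actor("A", {"tree": "plant-with-trunk", "sun": "bright-sky-object"})
receiver = Actor("B", {"tree": "plant-with-trunk", "sun": "warm-sky-object"})

expr = sender.utter("plant-with-trunk")        # -> "tree"
print(receiver.understand(expr))               # -> "plant-with-trunk": shared meaning
print(receiver.understand(sender.utter("bright-sky-object")))  # -> "warm-sky-object": only similar
```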

Although we know today of different kinds of animals having some form of language, no known species is comparable to homo sapiens with regard to language. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but can also stretch through long periods of time and can include highly complex cognitive objects and associated behavior.

Negative Complexity

In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus, whatever kind of experience or knowledge is accumulated in libraries and databases, the higher the negative complexity grows, the less this knowledge can help individual persons, whole groups, or whole populations to use it constructively. What happens is that the intended well-structured 'sound' of knowledge is turned into a noisy environment which crashes all kinds of intended structures into nothing or into badly deformed somethings.
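
One hedged way to make this notion more tangible (my own formalization, not taken from the cited 2006 paper): write M(E) for the amount of language-encoded meaning available in the environment E of an actor a, and C(a) for the processing capacity of that actor.

```latex
% Hedged formalization, not from the cited 2006 paper:
% M(E): language-encoded meaning available in the environment E
% C(a): processing capacity of the individual actor a
\[
  NC(a,E) = M(E) - C(a), \qquad \text{negative complexity occurs whenever } NC(a,E) > 0 ,
\]
% and the larger NC(a,E) becomes, the less of the accumulated knowledge in E
% the actor a can use constructively.
```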

Entangled Humans

From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation while the bodily separateness is a fake. Sitting in front of a laptop in a room, the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution, and being part of some project, this person will talk, perceive, feel, decide etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby follow these rules in their behavior too. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is the cognitive space in the brains of the participating persons.

If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected from him/her/x, and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, becoming a super person. Entangled persons can do things which usually are not possible as long as you are working as a pure individual person.[1]

Entangled Humans and Negative Complexity

Although entangled human persons can in principle enable more complex events, structures, processes, engineering, and cultural work than single persons, human entanglement is still limited by the capacities of the brain as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits to human processing, which can run into negative complexity.

Future is not Waiting

The term 'future' is cognitively empty: there exists nowhere an object which can be called 'future'. What we have is some local actual presence (the Now), which the body turns into internal representations of some kind (becoming the Past), but something like a future does not exist anywhere. Our knowledge about the future is radically zero.

Nevertheless, because our bodies are part of a physical world (planet, solar system, …), our entangled scientific work has identified some regularities of this physical world which can be used for predictions of what could happen, with some probability, as assumed states at a later time stamp on our clocks. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on many presuppositions.

Since its appearance some hundred thousand years ago in Africa, homo sapiens has become a game changer which makes all computations nearly impossible. Not at the beginning of its appearance, but in the course of time homo sapiens increased its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more the conditions of its own life. And in principle, thinking points out that homo sapiens could crash even more than only planet earth. Every exemplar of a homo sapiens has a built-in freedom which allows it at any time to decide to behave in a different way (although in everyday life we are mostly following some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but if we give the child no resources, deprive it of important social contexts, or give it the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.

Because the processes of the planet, the solar system, etc. keep going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.

VISION: DEVELOPING TOGETHER POSSIBLE FUTURES

To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity by using maximal diversity. Because present knowledge can be false, as history tells us, we should not rule out all those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:

  1. Find a format
  2. which allows any kinds of people
  3. for any kind of given problem
  4. with at least one vision of a possible improvement
  5. together
  6. to search and to find a path leading from the given problem (Now) to the envisioned improved state (future).
  7. For all needed communication any kind of  everyday language should be enough.
  8. As needed this everyday language should be extendable with special expressions.
  9. These considerations about possible paths into the wanted envisioned future state should continuously be supported  by appropriate automatic simulations of such a path.
  10. These simulations should include automatic evaluations based on the given envisioned state.
  11. As far as possible adaptive algorithms should be available to support the search, finding and identification of the best cases (referenced by the visions)  within human planning.

REFERENCES or COMMENTS

[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speaker-hearers of this language, and these rules are explicit cognitive structures (not necessarily conscious, mostly unconscious!).

Continuation

Yes, it will happen 🙂 Here.

KOMEGA REQUIREMENTS: Start with a Political Program

Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458, Nov 23-28, 2020
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning (mainly dealing with python programming) and a section about a web-server with Dragon. This document is part of the Case Studies section.

CONTENT

Applying the original P-V-Pref document structure to real cases, it became clear that the everyday logic behind the classification of facts into problems [P] or visions [V] follows a kind of logic hidden in the semantic space of the used expressions. This text explains this hidden logic and what it means for our application.

PDF DOCUMENT

VIDEO [DE]

REMARK

(After first presentations of this video)

(Last change: November 28, 2020)

Confusion by different meanings

While the general view of the whole process is quite clear, a hot debate arose about the everyday situation of the experts (here: citizens) and the concepts 'reality [R]', 'vision [V]' (the imagination of a state which is not yet real), 'problem [P]', and 'preference [Pref]'. The members of my zevedi working group (located at the INM, Frankfurt, Hessen, Germany) as well as a citizen from Dieburg (Hessen, Germany) associated with 'reality' also the different kinds of emotions being active in a person, and they classified an imagination about a future state as also being real in a concrete person. With such a setting of the concepts it became difficult to motivate the logic illustrated in the video. The video, based on the preceding paper, talks about a vision v which can turn a reality r into a problem p, thereby generating a preference Pref = (v,r). A preference can possibly become a trigger of some change process.

Looking ahead

Before clarifying this discussion let us have a look ahead at the overall change process which constitutes the heart of the komega software. Beginning with October 18, 2020, the idea of this overall change process has been described in this blog. Given some situation S, the komega software allows the construction of change rules X which can be applied to a given situation S, and a built-in simulator [sim] will generate a follow-up situation S' like sim(X,S)=S' (or short: X(S) = S'), a process which can be repeated by using the output S' as new input for a new cycle. At any time of this cyclic process one can ask whether the actual output S' can be classified as successful. What counts as 'successful' depends on the applied criteria. For the komega software at least two criteria are used. The most basic one looks at the actual end state S' of the simulation and computes the difference between the occurrences of vision statements V in S' and the occurrences of real statements R which have been declared at the beginning as problems P as part of the start situation S. Ideally the real statements classified as problems should have disappeared and the vision statements should be present. If the difference is bigger than some previously agreed threshold theta, then the actual end state S' will be classified as a success, as a goal state in the light of the visions of the preferences which triggered the change process.
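
A minimal Python sketch of this cycle may help (my own simplification, not the komega implementation; the statements and the change rule below are hypothetical examples): situations are represented as sets of statements, a change rule is a function from S to S', and the success test compares realized vision statements with remaining problem statements against the threshold theta.

```python
# Minimal sketch, not the komega implementation; all statements and the
# change rule below are hypothetical examples.
from typing import Callable, Set

Situation = Set[str]                          # a situation S = set of statements
ChangeRule = Callable[[Situation], Situation]

def sim(X: ChangeRule, S: Situation) -> Situation:
    """Apply one change rule X to the situation S, yielding S' = X(S)."""
    return X(S)

def is_success(S_prime: Situation, V: Situation, P: Situation, theta: int) -> bool:
    """S' is a goal state if (visions present) - (problems still present) > theta."""
    return len(V & S_prime) - len(P & S_prime) > theta

# hypothetical start situation, vision statements and change rule
P: Situation = {"too much traffic in district D"}          # real statements marked as problems
V: Situation = {"traffic in district D reduced by 20%"}    # vision statements
S: Situation = set(P)

def X(S: Situation) -> Situation:                          # one possible change rule
    S2 = set(S)
    S2.discard("too much traffic in district D")
    S2.add("traffic in district D reduced by 20%")
    return S2

S_prime = sim(X, S)                        # one simulation cycle; repeatable with S = S_prime
print(is_success(S_prime, V, P, theta=0))  # True: problem gone, vision realized
```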

Vision statement

In the context of the whole change process a vision statement is an expression e associated with some everyday language L which describes, in the understanding of the experts, a state which is conceivable and imaginable in our minds, which is not given as a real state, but which could eventually become a real state in some future. This distinction presupposes that the expert can distinguish between an idea in his consciousness which is associated with some real state outside his consciousness (associated with a real state) and an idea which exists only inside his consciousness (associated with an imagined state). Looking from a second person at the expert, this second person can observe the body of the expert and the world surrounding the body and can speak of the real world and the real body of the expert, but the inner states of the expert are hidden from this second person. Thus from the point of view of this second person there are no real imaginations, no real future states. But the expert can utter some expression e which has a meaning describing some state which as such is not yet real, but which could possibly become real if one changed the actual reality (the actual everyday life, the actual city, …) accordingly. Thus a vision statement is understood here as an expression e from the everyday language L, uttered by some expert, having a meaning which can be understood by the other persons and describing some imagined state which is not yet real but could eventually become real in some future ahead.

Creating problems, composing preferences

If at least one vision statement v is known by some experts, then it can happen that an expert relates this vision to some given reality r as part of everyday life, or to some absent reality r. Example: if an expert classifies some part of the city as having too much traffic (r1) and he has the vision of changing this into a situation where the traffic is lowered by X% (v1), then this vision statement v1 can help other experts to interpret the reality r1 in the light of the vision v1 as a problem v1(r1) = p1. Classifying some reality r1 as a problem p1 is understood in the context of the komega software as making the reality r1 a candidate for a possible change, in the sense that r1 should be replaced by v1. Having taken this stance (seeing the reality r1 as a problem p1 through the vision v1), the experts have created a so-called preference Pref = (v1, p1), saying that the experts prefer the imagined, possibly future state v1 over the actual problem p1.

There is the special case that an expert has uttered a vision statement v but there is no given reality which can be stated in a real statement r. Example: a company thinks that it can produce a vaccine against the disease Y in two years from now, like v2 = 'there is a vaccine against disease Y in yy'. Actually there exists no vaccine, but a disease is attacking the people. Because it is known that people can be made immune against the disease by an appropriate vaccine, it makes sense to state r2 = 'There is no vaccine against the disease Y available'. Having the vision v2, this can turn the reality r2 into a problem p2, allowing the preference Pref = (v2,p2).
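
The bookkeeping behind v(r) = p and Pref = (v,p) could look roughly like the following hedged Python sketch; the class and function names are my own, not the komega data model. It also replays the vaccine example from the text above.

```python
# Hedged sketch; class and function names are mine, not the komega data model.
from dataclasses import dataclass

@dataclass(frozen=True)
class RealStatement:            # r: describes a given (or absent) reality
    text: str

@dataclass(frozen=True)
class VisionStatement:          # v: describes an imagined, not yet real state
    text: str

@dataclass(frozen=True)
class Problem:                  # p = v(r): a reality marked as candidate for change
    reality: RealStatement
    in_the_light_of: VisionStatement

@dataclass(frozen=True)
class Preference:               # Pref = (v, p)
    vision: VisionStatement
    problem: Problem

def classify(v: VisionStatement, r: RealStatement) -> Preference:
    """Turn reality r into a problem p in the light of vision v and record Pref = (v, p)."""
    return Preference(vision=v, problem=Problem(reality=r, in_the_light_of=v))

# the vaccine example from the text above
r2 = RealStatement("There is no vaccine against the disease Y available")
v2 = VisionStatement("There is a vaccine against disease Y in two years")
pref2 = classify(v2, r2)        # Pref = (v2, p2), a possible trigger for a change process
```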

Triggering actions

If a group of experts has generated a vision v (for several and different reasons, including emotions), has associated it with some given reality r, and has decided to generate by v(r)=p a preference Pref = (v,p), then it can happen that these experts decide to start a change process beginning now with the given problem p and ending up with a situation in some future where the problem p has disappeared and the vision has become real.

Summing up

The komega software allows the planning and testing of change processes  if the acting experts have at least one preference Pref based on at least one  vision statement v and at least one real statement r.
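
A change process in this sense starts with the problem state and counts as finished when the imagined state described by the vision statement has become real. The following minimal, self-contained sketch (all names are hypothetical) only illustrates this goal test; it is not the komega planning machinery.

```python
def change_process_finished(current_facts: set, vision: str) -> bool:
    """A change process (beginning with the problem p) counts as finished when the
    imagined state described by the vision statement is contained in the given facts."""
    return vision in current_facts

facts_now = {"this part of the city has too much traffic"}
facts_later = {"traffic in this part of the city is lowered by 20%"}
vision = "traffic in this part of the city is lowered by 20%"

assert not change_process_finished(facts_now, vision)
assert change_process_finished(facts_later, vision)
```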

BITS OF PHILOSOPHY

Figure: The philosophical point of view, showing the framework for the concepts used.

The above video (in German, DE) and the following lengthy remark after the video on how to understand the basic concepts vision statement [v], real statement [r], problem statement [p], and preference [Pref] both presuppose a certain kind of philosophy. This philosophical point of view is outlined above in a simple drawing.

Basically there is a real human person (an actor) with a real brain embedded in some everyday world. The person can perceive parts of the everyday world at every point in time. The most important reference point in time is the actual moment, called NOW.

Inside the brain the human person can generate some cognitive structure triggered by perception, by memory and by some thinking. Having learned some everyday language L, the human person can map the cognitive structure onto an expression E of the language L. If the cognitive structure correlates with some real situation outside the body, then the meaning of the expression E is classified as a real statement, here named E1. But the brain can also generate cognitive structures and map these onto expressions E without any actual correlation with a real situation outside. Such a statement is here called a vision statement, here named E2. A vision statement can eventually become correlated with some real situation outside at some point in the future. In that case the vision statement E2 transforms into a real statement, while the before-mentioned real statement E1 can lose its correlation with a real situation.

FURTHER DISCUSSIONS

For further discussions have a look at this page too.

 

Extended Concept for Meaning Based Inferences. Version 1

ISSN 2567-6458, 30.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

TruthTheoryExtended-v1

The Simulator as a Learning Artificial Actor [LAA]. Version 1

ISSN 2567-6458, 23.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document will be part of the Case Studies section.

Abstract

The analysis of the main application scenario revealed that classical logical inference concepts are insufficient for the assistance of human actors during shared planning. It turned out that the simulator has to be understood as a real learning artificial actor which has to gain the required knowledge during the process.

PDF DOCUMENT

LearningArtificialActor-v1 (last change: Aug 23, 2020)

CASE STUDIES

eJournal: uffmm.org
ISSN 2567-6458, 4.May  – 16.March   2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory of Jan 18, 2020 the concept has been further developed, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], in order to address any kind of experts for any kind of simulation-based development, testing and gaming. Additionally the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (different from the mainstream concept 'artificial intelligence'). The new concept can be used with every normal language; no need for any special programming language! Go back to the overall framework.

COLLECTION OF PAPERS

There exists only a loose order between the different papers due to the character of this elaboration process: generally this is an experimental philosophical process. HMI Analysis applied for the CM:MI paradigm.

 

JANUARY 2021 – OCTOBER 2021

  1. HMI Analysis for the CM:MI paradigm. Part 1 (Febr. 25, 2021)(Last change: March 16, 2021)
  2. HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision (Febr. 27, 2021)
  3. HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories (March 2, 2021)
  4. HMI Analysis for the CM:MI paradigm. Part 4. Tool Based Development with Testing and Gaming (March 3-4, 2021, 16:15h)

APRIL 2020 – JANUARY 2021

  1. From Men to Philosophy, to Empirical Sciences, to Real Systems. A Conceptual Network. (Last Change Nov 8, 2020)
  2. FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline how one can map the DAAI paradigm directly into the GCA paradigm (April-19,2020): case1-daai-gca-v1
  3. CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology) (July 28, 2020)
  4. A first GCA open research project [GCA-OR No.1].  This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
  5. Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process by looking at the acting actors, where certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly not separated from society (April-9, 2020): case1-population-start-part0-v1
  6. Bootstrapping some Citizens. This  paper clarifies the set of general assumptions which can and which should be presupposed for every kind of a real world dynamical model (April-4, 2020): case1-population-start-v1-1
  7. Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a usual web-conference tool with an interactive web-page of our own (23.May 2020): HSGE-v2 (May-5, 2020): HSGE-v0-1
  8. The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction. (July 16, 2020)
  9. CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model (June 19.-30., 2020)
  10. KOMEGA REQUIREMENTS No.1. Basic Application Scenario (last change: August 11, 2020)
  11. KOMEGA REQUIREMENTS No.2. Actor Story Overview (last change: August 12, 2020)
  12. KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S (last change: August 12, 2020)
  13. The Simulator as a Learning Artificial Actor [LAA]. Version 1 (last change: August 23, 2020)
  14. KOMEGA REQUIREMENTS No.4, Version 1 (last change: August 26, 2020)
  15. KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario (last change: August 28, 2020)
  16. Extended Concept for Meaning Based Inferences. Version 1 (last change: 30.August 2020)
  17. Extended Concept for Meaning Based Inferences – Part 2. Version 1 (last change: 1.September 2020)
  18. Extended Concept for Meaning Based Inferences – Part 2. Version 2 (last change: 2.September 2020)
  19. Actor Epistemology and Semiotics. Version 1 (last change: 3.September 2020)
  20. KOMEGA REQUIREMENTS No.4, Version 3. Basic Application Scenario (last change: 4.September 2020)
  21. KOMEGA REQUIREMENTS No.4, Version 4. Basic Application Scenario (last change: 10.September 2020)
  22. KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario (last change: 13.September 2020)
  23. KOMEGA REQUIREMENTS: From the minimal to the basic Version. An Overview (last change: Oct 18, 2020)
  24. KOMEGA REQUIREMENTS: Basic Version with optional on-demand Computations (last change: Nov 15,2020)
  25. KOMEGA REQUIREMENTS:Interactive Simulations (last change: Nov 12,2020)
  26. KOMEGA REQUIREMENTS: Multi-Group Management (last change: December 13, 2020)
  27. KOMEGA-REQUIREMENTS: Start with a Political Program. (last change: November 28, 2020)
  28. OKSIMO SW: Minimal Basic Requirements (last change: January 8, 2021)

 

 

ACI – TWO DIFFERENT READINGS

eJournal: uffmm.org
ISSN 2567-6458, 11.-12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Change: May-17, 2019 (Some Corrections, ACI associations)
Change: May-20, 2019 (Reframing ACI with AAI)
CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

HCI – HMI – AAI ==> ACI ?

Whoever has followed the discussion in this blog remembers several different phases in the conceptual frameworks used here.

The first paradigm, called Human-Computer Interface (HCI), has been mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, sometime around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI) because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely and for some time I thought that this is now the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially I got the impression that very important factors rooted in the special human actor became invisible although they play a decisive role in many processes. Another punch against the AAI view came from application scenarios during the last year, when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym, but when using structures like ACI within general theories and within complex computing environments it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.

HISTORY

To understand this oscillation between AAI and ACI one has to look back into the history of Human Computer/Machine Interaction, not only back to the end of World War II, but into the more extended evolutionary history of mankind on this planet.

It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution a lot. A next step was the development of tools to support human cognition, like scripture, numbers, mathematics, books, libraries etc. In this last case of cognitive tools the material of the cognitive tools was not the primary subject of the processes, but rather the cognitive contents, structures, and even processes encoded by the material structures of the tools.

Only slowly did mankind understand how cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge amount of cognitive functions, often interacting with each other; these cognitive functions show, in the light of observable behavior, clear limits with regard to the amount of features which can be processed in some time interval, with regard to precision, with regard to working interconnections, and more. And therefore it has been understood that the different kinds of cognitive tools are very important to support human thinking and to strengthen it in some ways.

Only in the 20th century was mankind able to build a cognitive tool called a computer, which could show capabilities resembling some human cognitive capabilities and which even surpassed human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves but through the thinking and the engineering of humans!), and meanwhile the number and variety of capabilities where the computer seems to resemble a human person or surpasses human capabilities have expanded in a way that it has become common slang to talk about intelligent machines or smart devices.

While the original intention for the development of computers was to improve cognitive tools with the intent of supporting human beings, one can today get the impression as if the computer has turned into a goal of its own: the intelligent and then (as supposed) the super-intelligent computer appears now as the primary goal, and mankind appears as some old relic which has to be surpassed soon.

As will be shown later in this text, this vision of the computer surpassing mankind rests on some assumptions which are questionable.

What seems possible, and what seems to be a promising roadmap into the future, is a continuous step-wise enhancement of the biological structure of mankind which absorbs modern computing technology through new cognitive interfaces, which in turn presuppose new types of physical interfaces.

To give a precise definition of these new upcoming structures and functions is not yet possible, but it seems possible to identify the actual driving factors as well as the exciting combinations of factors.

COGNITION EMBEDDED IN MATTER

Actor-Cognition Interaction (ACI): A simple outline of the whole paradigm
Cognition within the Actor-Actor Interaction (AAI)  paradigm: A simple outline of the whole paradigm

The main idea is to shift the focus away from the physical grounding of the interaction between actors and to look instead more at the cognitive contents and processes which shall be mediated by the physical conditions. Clearly the analysis of the physical conditions as well as the optimal design of these physical conditions is still a challenge and a task, but without clear knowledge, manifested in a clear model, about the intended cognitive contents and processes one does not have enough knowledge for the design of the physical layout.

SOLVING A PROBLEM

Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) who identify some problem (P) in their environment and who have some minimal idea of a possible solution (S) for this problem. This can be complemented by some non-functional requirements (NFRs) articulating some more general properties which shall hold throughout the whole solution (e.g. 'being safe', 'being barrier-free', 'being real-time' etc.). If the description of the problem with a first intended solution including the NFRs contains at least one task (T) to be solved, minimal intended users (U) (here called executive actors (eA)), minimal intended assistive actors (aA) to assist the user in doing the task, as well as a description of the environment of the task, then the minimal ACI-Check can be passed and the ACI analysis process can be started.
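
A minimal sketch of such an ACI-Check, under the assumption that the problem description is collected in a simple data record; all field and function names are hypothetical and not taken from an existing ACI tool.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemDescription:
    problem: str                    # P: the identified problem
    intended_solution: str          # S: minimal idea of a possible solution
    nfrs: list = field(default_factory=list)              # e.g. "being safe", "being barrier-free"
    tasks: list = field(default_factory=list)             # T: tasks to be solved
    executive_actors: list = field(default_factory=list)  # eA: intended users
    assistive_actors: list = field(default_factory=list)  # aA: assisting actors
    environment: str = ""           # description of the task environment

def minimal_aci_check(pd: ProblemDescription) -> bool:
    """The minimal ACI-Check as described in the text: at least one task,
    at least one executive actor, at least one assistive actor,
    and a description of the environment must be present."""
    return bool(pd.tasks) and bool(pd.executive_actors) \
        and bool(pd.assistive_actors) and bool(pd.environment)
```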

COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE

If we talk about cognition then we usually think about cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals by communicative acts. Furthermore it has to be taken into account that the cognition of an individual person is in itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all the processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to think something, this person has to trigger his own unconsciousness with stimuli so that it responds with some messages from his previously unknown knowledge. Thus even an individual person alone has to organize a communication with his own unconsciousness to be able to gain some conscious knowledge about his own unconscious knowledge. And because no individual person has at any point in time a clear knowledge of his unconscious knowledge, the person does not even really know what to look for, as long as there is no event, no perception, no question and the like which triggers the person to interact with his unconscious knowledge (and experience) to get some messages from this unconscious machinery, which, as it seems, is working all the time.

On account of this logic of the individual's internal communication with his own cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment of the interaction with the knowledge distributed across the different persons. If, as in the following approach, the different knowledge responses are represented in a common symbolic representation, viewable (and hearable) by all participating persons, there grows a possible picture of something which is generally richer and has more facets than a picture generated by an individual person alone. Furthermore such a procedure can help all participants to synchronize their different knowledge fragments into a bigger picture and to use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.

If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only with static symbolic models but, beyond this, with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest, then the effect of expanding the explicit and shared knowledge will be boosted even more.

Against this background it makes sense to turn the wording Actor-Cognition Interaction into the wording Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system (here a human person), Collective means that different individual persons share their unconscious knowledge by communicative interactions, and Augmented can be interpreted as enhancing and extending this sharing of knowledge by using new tools of modeling, simulation and gaming, which expand and intensify individual learning as well as the commonly shared opinions. For nearly all problems today this appears to be absolutely necessary.

ACI ANALYSIS PROCESS

Here it will be assumed that there exists a group of ACI experts who can supervise other actors (stakeholders, domain experts, …) in a process to analyze the problem P with the explicit goal of finding a satisfying solution (S+).

For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.

In this ACI analysis process one can distinguish two main phases: (1) construct an actor story (AS) which describes all intended states and intended changes; (2) make several tests of the actor story to exploit its explanatory power.

ACTOR STORY (AS)

The actor story describes all possible states (S) of the tasks (T) to be realized in order to reach intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, given a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally, the computation of the follow-up state (S') is done by a simulator function (σ), written as: σ: S* x X → S.
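
The simulator function σ can be illustrated with a minimal Python sketch in which a state is a set of facts and a change rule is a condition/effect pair. This representation is an assumption for illustration, not the actual implementation behind the actor story.

```python
from typing import NamedTuple, List

State = frozenset  # a state S as a set of facts

class ChangeRule(NamedTuple):
    condition: frozenset  # facts that must hold for the rule to apply
    additions: frozenset  # facts added to the follow-up state
    removals: frozenset   # facts removed from the follow-up state

def sigma(state: State, rules: List[ChangeRule]) -> State:
    """Simulator function σ: S* x X -> S.
    Applies all applicable change rules X to the actual state S*
    and returns the follow-up state S'."""
    follow_up = set(state)
    for rule in rules:
        if rule.condition <= state:  # the rule applies in the actual state
            follow_up -= rule.removals
            follow_up |= rule.additions
    return frozenset(follow_up)

# Example: one simulation step starting from the start state S0
s0 = frozenset({"too much traffic"})
x1 = ChangeRule(condition=frozenset({"too much traffic"}),
                additions=frozenset({"traffic lowered by X%"}),
                removals=frozenset({"too much traffic"}))
s1 = sigma(s0, [x1])
```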

SEVERAL TESTS

With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in a way that makes it decidable whether an NFR is valid with regard to an actor story or not. In this case the test of validity can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF), where one can pose a question to the system and the system will answer the question with regard to all theoretically possible states, without the necessity to run a (passive) simulation.
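
The idea that an NFR becomes decidable with regard to an actor story can be sketched minimally as follows: enumerate all states reachable from a start state via a successor function and check a requirement predicate in each of them. The brute-force strategy and the function names are illustrative assumptions (they presuppose a finite state space) and do not describe the AVP or oracle function of an existing tool.

```python
from typing import Callable, FrozenSet, Iterable

State = FrozenSet[str]

def verify_nfr(start: State,
               successors: Callable[[State], Iterable[State]],
               nfr_holds: Callable[[State], bool]) -> bool:
    """Explore all theoretically reachable states of the actor story
    and check whether the NFR predicate holds in every one of them."""
    seen = {start}
    worklist = [start]
    while worklist:
        state = worklist.pop()
        if not nfr_holds(state):
            return False  # the NFR is violated in some reachable state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return True
```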

If the size of the group is large and it is important that all members of the group have a sufficiently similar knowledge about the problem(s) in question (as is usually the case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win-states whereas the interactive simulation doesn't. Explicit win-states can improve learning a lot.

The other interesting question is whether an actor story AS with a certain idea for an assistive actor (aA) is usable for the executive actors. This requires explicit measurements of usability, which in turn require a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference with which the observable behavior of the executing actors will be compared. Thus for the measurement one needs real executive actors which represent the intended executive actors, and one needs a physical realization of the intended assistive actors, called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor but a physical entity which can show all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it has been assumed to be sufficient to test a test person only once, it is here assumed that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system. This implies that the test person will behave differently in different tests. The degree of change can be a hint of the easiness and the learnability of the assistive actor.
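
A minimal sketch of this measurement idea: compare the observed behavior of a test person in at least three test runs against the actor story as the norm of reference, and read the change across runs as a hint of learnability. The deviation measure used here is a hypothetical, deliberately simple choice.

```python
from typing import List, Sequence

def deviation(observed: Sequence[str], norm: Sequence[str]) -> int:
    """Deliberately simple measure: number of steps of the actor story
    (the norm of reference) that the observed behavior did not reproduce."""
    return len([step for step in norm if step not in observed])

def learnability_hint(runs: List[Sequence[str]], norm: Sequence[str]) -> List[int]:
    """At least three test runs per test person; a decreasing sequence of
    deviations hints at an easy-to-learn assistive actor."""
    assert len(runs) >= 3, "a test person has to be tested at least three times"
    return [deviation(run, norm) for run in runs]

norm = ["open app", "select district", "enter target value", "start simulation"]
runs = [["open app", "select district"],
        ["open app", "select district", "enter target value"],
        ["open app", "select district", "enter target value", "start simulation"]]
print(learnability_hint(runs, norm))  # [2, 1, 0]
```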

COLLECTIVE MEMORY

If an appropriate ACI software is available then one can consider an actor story as a simple theory (ST), embracing a model (M) and a collection of rules (R) (ST(x) iff x = <M,R>), which can be used as a kind of building block that in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories) then one can build up over time a large knowledge base of one's own.
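
A minimal sketch of a simple theory as the pair <M,R> and of the combination of two such building blocks; representing the model and the rules as plain sets, and merging by set union, is a hypothetical simplification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimpleTheory:
    model: frozenset  # M: a set of facts describing a state of affairs
    rules: frozenset  # R: a collection of change rules (opaque items here)

def combine(t1: SimpleTheory, t2: SimpleTheory) -> SimpleTheory:
    """Combine two building blocks into a larger theory by uniting
    their models and their rule collections."""
    return SimpleTheory(model=t1.model | t2.model, rules=t1.rules | t2.rules)

library = []  # a publicly available 'library of theories'
t_traffic = SimpleTheory(frozenset({"too much traffic"}), frozenset({"rule: reroute trucks"}))
t_air = SimpleTheory(frozenset({"low air quality"}), frozenset({"rule: plant trees"}))
library.append(combine(t_traffic, t_air))
```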

 

 

AAI-THEORY V2 – BLUEPRINT: Bottom-up

eJournal: uffmm.org,
ISSN 2567-6458, 27.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 28.February 2019 (Several corrections)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of how to proceed in a bottom-up approach.

BOTTOM-UP: THE GENERAL BLUEPRINT

Figure 1: Outline of the process how to generate an AS with a bottom-up approach

As the introductory figure shows, it is assumed here that there is a collection of citizens and experts who offer their individual knowledge, experiences, and skills to 'put them on the table', challenged by a given problem P.

This knowledge is in the beginning not structured. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first 'objects' and 'relations' characterizes a set of facts which define a 'situation' or a 'state' as a collection of 'facts'. Such a situation/state can also be understood as a first simple 'model' in response to a given problem. A model is as such 'static'; it describes what 'is' at a certain point in 'time'.

In a next step the group has to identify possible 'changes' which can be associated with at least one fact. There can be many possible changes, which may need different durations to come into effect. These effects can happen as 'exclusive alternatives' or in 'parallel'. Applying the possible changes to a situation generates 'successors' of the actual situation. A sequence of situations generated by applied changes is usually called a 'simulation'.
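
The step from facts and changes to a simulation can be sketched minimally as follows; representing a situation as a set of facts and a change as a pair of removals and additions is an assumption for illustration.

```python
from typing import FrozenSet, List, Tuple

Situation = FrozenSet[str]                      # a 'state' as a collection of 'facts'
Change = Tuple[FrozenSet[str], FrozenSet[str]]  # (facts to remove, facts to add)

def apply_changes(situation: Situation, changes: List[Change]) -> Situation:
    """Apply the identified changes to a situation and return the successor situation."""
    facts = set(situation)
    for removals, additions in changes:
        facts -= removals
        facts |= additions
    return frozenset(facts)

def simulate(start: Situation, changes: List[Change], steps: int) -> List[Situation]:
    """A 'simulation' as a sequence of situations generated by applied changes."""
    history = [start]
    for _ in range(steps):
        history.append(apply_changes(history[-1], changes))
    return history
```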

If one allows the interaction between real actors and a simulation by associating a real actor with one of the actors 'inside the simulation', one turns the simulation into an 'interactive simulation', which basically represents a 'computer game' (short: 'egame').

One can use interactive simulations e.g. to (i) learn about the dynamics of a model, to (ii) test the assumptions of a model, to (iii) test the knowledge and skills of the real actors.

Making new experiences with a  simulation allows a continuous improvement of the model and its change rules.

Additionally one can include more citizens and experts into this process and one can use available knowledge from databases and libraries.

EPISTEMOLOGY OF CONCEPTS

Fig.2: Epistemology of concepts used in an AAI Analysis process

As outlined in the preceding section about the blueprint of a bottom-up process, there will be a heavy usage of concepts to describe states of affairs.

The literature about this topic in philosophy as well as in many scientific disciplines is overwhelming, and therefore this small text here can only be a 'pointer' into a complex topic. Nevertheless I will use exactly this pointer to explore the topic further.

While the literature mainly deals with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical (especially epistemological) view and which also respects many results of scientific disciplines.

The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely 'unconsciously', and then (iii) the 'conscious' part of the brain.

The most important relationship between the 'conscious' and the 'unconscious' part of the brain is the ability of the unconscious brain to automatically transform incoming concrete sense experiences into more 'abstract' structures, which have at least three sub-dimensions: (i) different concrete material, (ii) a subset of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to use only a 'few' abstract structures (= abstract concepts) to deal with 'many' concrete events. Thus the abstract concept 'chair' can cover many different concrete chairs which have only a few properties in common. Additionally the chairs can occur in different 'contexts', associating them with different 'relations' which can specify possible different 'usages' of the concept 'chair'.

Thus, if the actor perceives something which 'matches' some 'known' concept, then the actor is not only conscious of the concrete empirical phenomenon but also, simultaneously, of the abstract concept which is automatically activated. 'Immediately' the actor 'knows' that this empirical something is e.g. a 'chair'. Concretely: this concrete something matches an abstract concept 'chair' which as such can cover many other concrete things too, each of which can be partially different from the others.
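
A minimal sketch of this matching idea, representing an abstract concept by its three sub-dimensions (concrete variants, common properties, contexts) and matching a perceived something by its properties. The representation is a hypothetical illustration of the described structure, not a model of the brain.

```python
from dataclasses import dataclass, field

@dataclass
class AbstractConcept:
    name: str
    common_properties: frozenset                 # (ii) subset of extracted common properties
    instances: set = field(default_factory=set)  # (i) concrete material variants encountered
    contexts: set = field(default_factory=set)   # (iii) contexts the concept occurred in

def matches(perceived_properties: frozenset, concept: AbstractConcept) -> bool:
    """A perceived concrete something 'matches' a known concept if it shows
    at least the few properties its instances have in common."""
    return concept.common_properties <= perceived_properties

chair = AbstractConcept("chair", frozenset({"has a seat", "can be sat on"}))
something = frozenset({"wooden", "has four legs", "has a seat", "can be sat on"})
print(matches(something, chair))  # True: the actor 'knows' this something is a chair
```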

From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there ('it is the case that' = 'it is true') or not ('it is not the case that' = 'it is not true' = 'it is false'), an actor cannot directly decide whether an abstract concept like 'chair' as such is 'true' in the sense that the concept 'as a whole' corresponds to concrete empirical occurrences. This follows from the fact that an abstract concept like 'chair' can match a nearly infinite set of possible concrete somethings, which are called 'possible instances' of the abstract concept. But a human actor can directly 'check' only a 'few' concrete somethings. Therefore the usage of abstract concepts like 'chair', 'house', 'bottle' etc. inherently implies an 'open set' of 'possible' concrete exemplars, and therefore the usage of such concepts is necessarily a 'hypothetical' usage. Because we can check the real extensions of these abstract concepts 'in principle' in everyday life only as long as there is the 'freedom' to do such checks, we lose the 'truth' of our concepts, and thereby the basis for a realistic cooperation, if this 'freedom of checking' is not possible.

If some incoming perception is 'not yet known', because nothing given in the unconsciousness 'matches' it, it is in a basic sense 'new' and the brain will automatically generate a 'new concept'.

THE DIMENSION OF MEANING

In Figure 2 one can find two other components: the 'language expressions' and the 'meaning relation', which maps concepts onto language expressions.

Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.

These language expressions are usually not 'isolated structures' but are embedded in relations which map the expression structures onto conceptual structures, including the different substantiations of the abstract concepts and the associated contexts. Through these relations the expressions are attached to the conceptual structures, which are called the 'meaning' of the expressions; and vice versa, the expressions are called the 'language articulation' of the meaning structures.

As far as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the 'activation' of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can exist (especially with children) without an available meaning relation.

When language expressions are used within a communicative act, their usage can activate in all participants of the communication the 'learned' concepts as their intended meanings. Having the meaning activated in someone's 'consciousness' is a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is 'backed up' by some 'real matter' in the external world. Someone can utter that it is raining; in the hearer of this utterance the intended concepts can become activated, but in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression 'Look, it's raining' has no counterpart in the real world; therefore we call the utterance in this case 'false' or 'not true'.

THE DIMENSION OF TIME

Fig.3: The dimension of time based on past experience and combinatoric thinking

The preceding Figure 2 of the conceptual space is not yet complete. There is another important dimension, based on the ability of the unconscious brain to 'store' certain structures in a 'timely order', which enables an actor (under certain conditions!) to decide whether a certain structure X occurred in consciousness 'before' or 'after' or 'at the same time' as another structure Y.

Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures under certain conditions in a 'timely order'; (ii) it can detect 'differences' between timely succeeding structures; (iii) it can conceptualize these changes as 'change concepts' ('rules of change'), and it can classify different kinds of change as 'deterministic', 'non-deterministic' with different kinds of probabilities, or 'arbitrary' as in the case of 'free learning systems'. Free learning systems are able to behave in a 'deterministic-like manner', but they can also change their patterns on account of internal learning and decision processes in nearly any direction.

Based on memories of conceptual structures and derived change concepts (rules of change), the unconscious brain is able to generate different kinds of 'possible configurations', whose quality depends on the degree of dependencies within the 'generating criteria': (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions for 'upcoming states' (if all drinkable water were consumed, then one could not plan any further with drinkable water).