TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software @chatGPT4o with manual modifications.
This is another interim reflection aimed at further clarifying the structural terminology of ‘intelligence’ and ‘learning.’ This and the preceding interim reflections are intended as a ‘supplement’ to the main thread of the text project What is Life?
The initial analysis of the concept of intelligence made it clear that, within the framework of an intelligence test, only those responses can be given by the test subject that are currently available to them. Apart from innate reflexes and physically conditioned properties, these responses stem from internal structures that have emerged through learning processes. The structures acquired through learning will here be referred to as knowledge or experience.
The distinction between the terms intelligence, learning, knowledge, and experience will be further elaborated in the following text. This will reveal a general structure of Homo sapiens (HS) as part of the overarching Life (L), equipped with the capacity to potentially secure its survival on this planet (P). However, as a part of life, Homo sapiens can also use its abilities to contribute to the survival of all life, which ultimately ensures its own existence. Aside from an unpredictable future, Homo sapiens remains entirely dependent on the totality of life on this planet.
IMAGE 1: Basic constellation of Planet (P), Life (L), and Homo sapiens (HS) as a subpopulation of Life. All components (P, L, HS) interact both within themselves and with one another.
INTELLIGENCE – LEARNING – KNOWLEDGE – MATERIAL CONDITIONS
Starting Point
The starting point of the further considerations is illustrated in the following diagram:
IMAGE 2: A dynamic perspective of the Planet-Life-Homo sapiens constellation, assuming a fictional observer.
From the perspective of a fictional observer (see IMAGE 2), the structure of the Planet – Life – Homo sapiens (PLHS) constellation, with its extensive interactions, can be seen as embedded within a temporal structure (T), which is understood here as a process with time slices (see Text No. 4).
In the diagram, there is a fixed reference point, the present (NOW). Before the present, there is a presumed history (HISTORY), and after the present, in the potentially emerging new presents (POSSIBLE FUTURES), further instances of the present may arise. However, at the current moment, in the present, these future instances are not yet known.
IMAGE 3: Assuming the presence of a brain that, through a body, can perceive properties of the world, later remember them, and subsequently organize them in an appropriate way, a certain dynamic structure of Planet, Life, and Homo sapiens becomes recognizable.
The fictional observer, however, exists only in the mind of a human being—provided that they can think, that they can remember, and that their memory has been shaped by actual perceptions. Since perception, memory, and thinking are all prone to error, the perspective of the fictional observer is always only more or less accurate.
Knowledge and Material Embedding
The thinking human is equipped with a body (Material Embedding 1) that is always situated in a real situation—or a real state (Material Embedding 2). This situation is always part of something larger. Here, we assume that this larger context is normally Planet Earth (which itself is embedded in almost arbitrarily vast contexts within the universe), along with the population of Homo sapiens and the totality of life (Material Embedding 3).
Over time, humans have discovered that, in the present situation, numerous traces (TRACES) can be found—traces that point to events related to the planet, events of life on the planet, and events of Homo sapiens as part of life on the planet.
These observable traces (observable TRACES) originate from different time periods in history and thus reveal patterns of change (CH). If changes exhibit regularities, they can serve as a basis for generating possible future states (POSSIBLE FUTURES) through a form of prediction logic.
Knowledge
For the human perspective on the planet, life, and itself, the decisive factor is that humans possess the following abilities: (i) Perceive everything around them in a way that allows them to (ii) Store the contents of perception so that they can (iii) Recall these contents later.
(iv) As we now understand, perception does not occur as a one-to-one representation but involves various simplifications/abstractions that allow a connection to current perception during recall.
(v) Due to the nature of stored content, (vi) Thinking can structure these contents in different ways and (vii) detect changes by comparing before and after.
If these changes exhibit sufficient regularity, (viii) thinking can generate limited predictions.
(ix) The process of generating predictions falls broadly under the concept of logic or logical inference (although modern formal logic captures only partial aspects of this real phenomenon).
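The chain described in points (i)–(ix) can be illustrated with a minimal sketch. All names here are illustrative inventions, not from the source text: stored (already abstracted) contents of perception are compared slice by slice, and only if the detected changes are sufficiently regular is a limited prediction generated.

```python
# Minimal illustrative sketch of points (vii)-(ix): compare stored
# observations "before and after", and extrapolate only when the
# observed changes exhibit sufficient regularity.

def detect_changes(observations):
    """Return the differences between successive stored observations."""
    return [b - a for a, b in zip(observations, observations[1:])]

def predict_next(observations):
    """If all observed changes are equal (regular), extrapolate one step."""
    changes = detect_changes(observations)
    if changes and all(c == changes[0] for c in changes):
        return observations[-1] + changes[0]
    return None  # no sufficient regularity -> no prediction

# Stored, already abstracted contents of perception:
print(predict_next([2, 4, 6, 8]))  # regular change of +2 -> predicts 10
print(predict_next([1, 2, 4]))     # irregular changes -> None
```

The point of the sketch is only the structural one made in the text: prediction presupposes comparison across remembered states, and fails where regularity is absent.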
One could formulate this idea as follows: Knowledge is a specific form of “resonance” of the material conditions under which knowledge emerges.
While material conditions can influence the emergence and structure of knowledge, they cannot:
Generate knowledge itself, nor
Generate, solely from within themselves, that which knowledge reflects.
Thus, the resonance in the format of knowledge points to something beyond knowledge itself.
In traditional philosophical language, this would be referred to as transcendence: with the availability of knowledge—arising through a true resonance—knowledge transcends (exceeds) the human being and their material limitations.
Material Embedding of Knowledge
Today, we understand that the material embedding of knowledge involves numerous factors that can act constructively or destructively in the formation of knowledge, up to preventing it from emerging at all.
The effects of these factors become particularly evident when people suffer from illnesses that weaken and torment them, possibly confining them to a specific place. The lack of water, food, or the absence of proper shelter to protect against cold, heat, and rain also plays a critical role. There is the threat posed by other humans or other life forms (animals, insects, plants, etc.). Emotions such as fear, hatred, intense agitation, or strong emotional dependencies can also interfere. Sensory impairments—in vision, hearing, smell, etc.—can further complicate knowledge formation. Poor communication due to language barriers, social exclusion, persecution, or being on the run because of natural disasters can also hinder this process—and much more.
Even one’s own body and existing knowledge can turn into obstacles: A person who holds specific knowledge at a given moment may be emotionally resistant to changing that knowledge. In such cases, their current understanding of the world and themselves freezes—it becomes rigid, and the dynamic nature of life dissipates. The life in their mind turns to stone.
Knowledge and Society; AI
If we recognize how complex and fragile the formation of knowledge is within the mind of a single person, we might begin to grasp the enormous challenge of enabling an entire population to jointly generate the knowledge necessary for securing a good survival in a possible future.
Since at least 2022, technical tools marketed under the misleading label of Artificial Intelligence (AI) have become available. These tools may indeed serve as a tremendous aid—provided that humanity cultivates its own knowledge accordingly (with AI as part of it).
However, as long as humans do not manage to properly integrate the material embedding of their knowledge, so-called AI will remain stagnant as well. The material embedding of humans is not merely a potential obstacle; it is simultaneously the enabler of world-relevant and life-relevant knowledge.
AI does not and cannot possess these two dimensions—not only today but fundamentally. Biological systems embody a potential that can truly transcend inanimate matter and transform it into something new.
Machines—designed, built, and programmed by humans—are, compared to biological systems, merely very simple tools. However, through symbiosis with biological systems, they may generate effects far beyond what we can currently imagine.
This is another intermediate reflection to structurally clarify the concepts of ‘intelligence’ and ‘learning’. This is an analytical proposal. This and the preceding intermediate reflection are intended as a ‘supplement’ to the main thread of the text project ‘What is Life?’
In the last insert, titled “INSERTION: A Brief History of the Concept of Intelligence and Its Future,” a short historical overview illustrated that (i) the concept of ‘intelligence’ has undergone significant change since antiquity and remains notably ‘underdefined’ to this day. Additionally, since the fall of 2022, (ii) there has been a surge in the use of a new type of technical (non-biological) intelligence, whose relationship to biological intelligence remains entirely unresolved. One key reason for this is that the concept of ‘biological intelligence’—aside from a few isolated attempts at clarification—remains largely undefined. Furthermore, (iii) in light of the new ‘dynamic perspective on life’ on this planet, there is growing evidence that currently known forms of non-biological (technical) intelligence cannot in any way replace essential properties of biological intelligence. This follows directly from the fundamental structure of biological systems, even without considering many details. As an initial outline for a more precise definition of the concept of ‘biological intelligence,’ (iv) it becomes apparent that biological intelligence is not a fixed entity but rather an emergent property of a process, allowing for continuous evolution—individually, collectively, and technologically.
Note: The term process is understood here in the sense of Text No. 4 (see the list above): If one breaks down the temporal course of an event (e.g., everyday life) into segments (time slices) and considers all the changes that occur from one segment to the next, one can also say that a successor segment emerges from a predecessor segment precisely through these changes. As long as an event can be described in this way as a sequence of segments (time slices) governed by changes, it shall be understood here as a process. The property of a process can then not be described in isolation but only as a momentary property within an ongoing change, which is in flux and has neither an absolute beginning nor an absolute end. Furthermore, many changes can occur simultaneously and may influence each other in multiple ways.
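The notion of a process as a sequence of time slices governed by changes can be sketched in a few lines. The state representation and the change rule below are purely illustrative assumptions, chosen only to make the structure of the definition concrete: each successor slice emerges from its predecessor precisely through the application of a change.

```python
# Hypothetical sketch of the 'process' notion from Text No. 4:
# a successor time slice emerges from a predecessor slice
# through the application of a change function.

def run_process(initial_slice, change, steps):
    """Generate a sequence of time slices, each produced from its predecessor."""
    slices = [initial_slice]
    for _ in range(steps):
        slices.append(change(slices[-1]))
    return slices

# A 'state' here is a dict of observable properties; the change rule
# is purely illustrative.
grow = lambda state: {"population": state["population"] + 1}
history = run_process({"population": 0}, grow, steps=3)
print(history)
# [{'population': 0}, {'population': 1}, {'population': 2}, {'population': 3}]
```

Note that the sketch captures only the sequential aspect; the text's further point—that many changes can occur simultaneously and influence each other—would require a richer change function operating on the whole state at once.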
As the term ‘biological intelligence’ already suggests, the rather undefined concept of ‘intelligence’ is considered here in the context of life.
In previous reflections, ‘life’ has appeared as a ‘dynamic phenomenon’ whose processual progression can be approximately described using ‘time slices’ (see Text No. 4).
In a ‘psychological perspective,’ individual manifestations of life in the form of ‘distinguishable living beings’ are attributed the ‘property of intelligence’ when a single living being in a given time slice Z1 is required to ‘solve a task’ and, after a finite number of further time slices, has indeed ‘changed the state of time slice Z1’ to the extent that a ‘new state’ has emerged, which is part of the then-current time slice. ‘Judges’ then determine whether this new state is ‘correct’ or ‘not correct.’ (Note: A ‘state’ is the totality of distinguishable phenomena that can be observed within a time slice.)
For example, if the task were “How do you complete the text ‘3+4=’?”, then in certain cultural contexts, the transformation of ‘3+4=’ into ‘3+4=7’ would be considered a ‘correct solution.’ Any deviation from this would be ‘incorrect.’ However, it is easy to imagine other cultural systems where the solution ‘3+4=7’ would not be correct, and instead, ‘3+4=13’ would be.
Even this simple example demonstrates that the question of the ‘correctness’ of a formulation presupposes that the participants operate within a ‘cognitive framework’ that defines what is ‘allowed’ and what is not. If, as in the example ‘3+4=7,’ we assume a decimal number system with the digits [0,…,9], then the answer is ‘correct.’ However, if we assume instead a quaternary (base-4) system with the digits [0,…,3], then ‘3+4=7’ cannot be correct because the digit ‘7’ does not exist in a base-4 system. Any sum exceeding ‘3’ would lead to a ‘positional shift’: ‘3+1=10, 3+2=11, 3+3=12, 3+4=13, …’
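The dependence of ‘correctness’ on the assumed numeral system can be made explicit with a small sketch (the helper function is an illustrative addition, not part of the source text): the same sum ‘3+4’ is rendered differently depending on the base the cognitive framework assumes.

```python
# Sketch of the base-4 example: the same sum '3+4' yields different
# digit strings depending on the numeral system assumed by the
# cognitive framework.

def to_base(n, base):
    """Render a non-negative integer in the given base as a digit string."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(to_base(3 + 4, 10))  # '7'  -> correct in the decimal framework
print(to_base(3 + 4, 4))   # '13' -> correct in the base-4 framework
print(to_base(3 + 3, 4))   # '12' -> matches the positional shift above
```

Both answers are ‘correct’; what changes is the framework against which the judge evaluates them.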
At this point, we could therefore formulate: To test for the presence of the property of ‘intelligence’ in a biological system, a ‘task’ is set, and we observe whether the biological system, in ‘response to the task,’ generates a ‘new situation’ that includes elements recognized as a ‘correct solution.’ A simple form of ‘correctness assessment’ could involve a direct comparison to verify correctness. For situations that are ‘more complex’ because they require ‘knowledge and experience’ that cannot be assumed to be present in all participants, the role of a ‘judge’ becomes necessary. This judge is someone whom all assume to be capable—based on their knowledge and experience—of determining whether a ‘new situation, within the assumed framework,’ should be classified as ‘correct’ or ‘not correct.’
Intelligence Needs Learning
The simple example above already illustrates that it makes sense to distinguish between the ‘property of intelligence’ as something ‘available’ and the ‘acquisition of intelligence.’ If one knows the ‘framework’ that defines and regulates how certain numerical symbols are used, then responding correctly to a ‘task’ is easy. However, if one does not yet know this framework, responding is impossible!
Learning Requires a Brain
In the case of ‘biological agents’ of the type Homo sapiens, we know that the foundation of behavior is the ‘brain.’ Although this ‘brain’ possesses a wide range of ‘automatic knowledge’ from birth, an individual agent must ‘learn’ nearly all details in order to navigate the highly dynamic environment (including their own body) effectively. It is precisely these ‘learning processes’ that equip the brain for various tasks, enabling it to produce the ‘required response’ at the decisive moment when a task demands it.
The ‘Miracle’ of Intelligence
The ‘miracle of intelligence’ lies not so much in the fact that a specific ‘agent’ provides the ‘expected correct answer’ to a given task, but rather in the brain’s ability to internally construct appropriate ‘structures’ and ‘transformation rules’ in response to a dynamic environment (including its own body!). These internal models provide the ‘clues’ needed to generate behavior that is considered ‘adequate’ or ‘correct’ in a given situation.
Even though we now understand that the ‘material composition of the brain’ can affect the speed and precision of ‘learning processes,’ this is not a fundamental obstacle to learning in general. Moreover, it has been observed that humans develop highly diverse ‘cognitive profiles’: some struggle with ‘languages’ but excel in ‘formal thinking’ or ‘abstract (not necessarily formal) reasoning,’ while others are particularly skilled in ‘visual and auditory processing,’ ‘physical coordination,’ and much more. Additionally, a wide range of ‘emotions’ plays a crucial role in the success of learning processes.
The key takeaway is that any form of ‘acquired’ intelligence—understood as the ability to respond appropriately to situational properties—is not available without learning.
Learning as an ‘Educational Process’ (for Intelligence and Beyond)
Looking back at the first reliable intelligence test [1] by Alfred Binet (1857–1911) and Théodore Simon (1873–1961), the ‘context’ of their efforts was clear: measuring ‘intelligence’ was not about capturing something ‘fixed or static’ but rather about assessing the ‘current states of knowledge and experience’ of students—states that could change! The tests were intended solely to help students ‘where they still had gaps.’
However, Binet and Simon’s awareness of the ‘dynamic nature of intelligence’ was gradually overshadowed by researchers like William Stern (1871–1938), Lewis M. Terman (1877–1956), and David W. Wechsler (1896–1981), who shifted the focus toward viewing intelligence as a ‘fixed structure’ that could be measured and used to ‘classify people.’ This perspective aligned with the prevailing tendency in educational systems to treat ‘assumed intelligence’ as something ‘given’ rather than fostering the ‘potential intelligence’ of individuals.
This development has taken on almost ‘tragic dimensions’: Schools increasingly ‘train’ children for tests instead of fostering their cognitive development. The natural curiosity and creative intelligence of children are often stifled as a result. This creates a kind of ‘negative feedback loop’: Teachers teach what is tested, students learn what is rewarded—true learning fades into the background.
[2] A very striking analysis of this topic can be found in the book The Mismeasure of Man by Stephen Jay Gould in the revised and expanded edition of 1996.
Learning, Society, and Goals
If we assume that ‘intelligence’ is generally a product of ‘learning processes’ (though the nature of individual bodies and brains influences these processes), and if learning does not occur in a ‘vacuum’ but within the concrete context of ‘everyday life,’ which is part of a specific ‘society,’ then every learning process will inevitably be ‘shaped’ by the available context.
For a society as a whole, its ‘survival’ may depend on whether ‘future generations’ acquire precisely the skills needed in a given phase of life.
Looking at what different societies do for their educational processes—and how they do it—one must seriously question whether these societies truly possess a ‘will to live’—not just on an individual level, but as a thriving collective.
Author: Gerd Doeben-Henisch in Dialogue with chatGPT4o
Changelog: February 15, 2025 – February 15, 2025
Email: info@uffmm.org
In the context of life in general, but especially in the context of the life form Homo sapiens—which is us—there is a phenomenon that has played a significant role for at least 2,500 years and has been referred to, with varying meanings, as “intelligence.”
With the recent hype surrounding so-called “Artificial Intelligence (AI),” the already rather vague meaning of intelligence has been further swept into a whirlwind of associations, making it difficult—if not impossible—for anyone, including specialists, to clearly understand what intelligence actually means.
To address this, here is a focused dialogue with chatGPT4o, aiming to outline the “development” of the understanding of the concept of intelligence in broad strokes—at least enough to make our present situation describable and diagnosable.
Note: The role of chatGPT4o is to search for connections between the content I introduce and texts it knows from the internet. This can help quickly establish a minimum level of historical context. For initial explorations of a question, this approach can be useful.
INSERTION: A Brief History of the Concept of Intelligence and Its Future
Questioner:
I would like to discuss the topic of ‘intelligence’ with you again today. Are you ready?
Responder:
Yes, of course! How would you like to approach the topic of “intelligence” today? Are you interested in biological intelligence, artificial intelligence, their co-evolution, or something else?
Questioner:
Today, it’s about ‘all aspects’ 🙂
Responder:
That sounds like an exciting challenge! Intelligence is a fascinating subject with many facets—from biological to artificial, to collective, and possibly even yet undiscovered forms of intelligence.
Let’s structure our conversation as follows:
Foundations of Intelligence – What do we understand by intelligence? Is there a general definition that encompasses all forms?
Biological Intelligence – From neural networks in the brain to social intelligence in groups and swarms.
Artificial Intelligence – How it works, its limitations, and to what extent it is comparable to biological intelligence.
Collective Intelligence – From ant colonies to the global interconnectedness of humans (and the role of AI within it).
Future of Intelligence – Could there be a superior, perhaps planetary or cosmic intelligence?
Would you like to start with one of these points, or do you have another perspective in mind? 😊
Questioner:
Yes, those are some possible conceptual distinctions. But I would like us to start with a historical overview: (1) When does the term ‘intelligence’ first appear in European history? (2) What are the main ‘contexts of discourse’ in which the term appears? (3) Are there significant differences between the traditions in ‘Greek’, ‘Latin’, and ‘Arabic’?
Responder:
Certainly, here’s a historical overview of the term “intelligence” and its different traditions:
1. Historical Origin of the Term “Intelligence” in Europe:
The term “intelligence” derives from the Latin word intellegere, meaning “to understand” or “to perceive.” Literally, it combines inter (“between”) and legere (“to read” or “to choose”), implying “to choose between.” In European history, the term first appears in Latin texts of antiquity. However, earlier Greek philosophers engaged with concepts akin to our modern understanding of intelligence, even though they used different terms.
2. Key Contexts of Discourse for the Term “Intelligence”:
Throughout history, the term “intelligence” has been used in various contexts:
Philosophy: Both in antiquity and later periods, philosophers discussed the nature of the human mind, reason, and cognitive abilities.
Theology: In the Middle Ages, intelligence was often associated with divine wisdom and angels, referred to as “intelligences.”
Psychology and Education: Since the 19th century, intelligence has been understood as a measurable cognitive ability, leading to the development of intelligence tests.
Sociology: The term “intelligentsia” referred to the educated class actively participating in cultural and political life, notably in Russia and later in other countries.
3. Differences in Traditions: Greek, Latin, Arabic:
Greek Tradition: Ancient Greeks used terms like nous (νοῦς) for mind or intellect and logos (λόγος) for reason or word. Philosophers like Aristotle examined the workings of the mind and the capacity for understanding.
Latin Tradition: With the translation of Greek works into Latin, nous was often rendered as intellectus. The term intelligentia evolved to describe the capacity for understanding or perception.
Arabic Tradition: During the Islamic Golden Age, Arabic scholars translated many Greek philosophical works. The term ‘aql (العقل) was used to denote reason or intellect. Philosophers like Avicenna (Ibn Sina) expanded upon Aristotelian concepts, integrating them into Islamic philosophy.
These different traditions significantly influenced each other. Arabic translations and commentaries later returned to Europe, contributing substantially to scholasticism and the understanding of intelligence in medieval Europe.
I hope this overview answers your questions about the historical background of the term “intelligence” and its various traditions.
Questioner:
Yes, thank you. Your insights at least hint at the temporal and cultural breadth involved when delving into the topic of ‘intelligence’. ‘Content-wise’, the roots seem to lie in Greek philosophy. Mediated through Arabic and Latin languages (and cultures!), these considerations permeated the ‘minds’ of people in later times and other places (with different cultures!). Further exploration of this would be an extensive endeavor in itself. However, I would now like to focus our attention on the ‘transition’ from antiquity and the High Middle Ages (with great thinkers!!!) to the early modern period, roughly from the 14th/15th century to the early 18th century. How would you describe the difference in the understanding and application contexts of the term ‘intelligence’ during this period?
Responder:
Certainly, let’s explore the transition from the Middle Ages to the early modern period and the associated changes in the understanding and application contexts of the term “intelligence” between the 14th and early 18th centuries.
1. Renaissance (14th to 16th Century):
The Renaissance marked a revival of ancient ideals and emphasized humanism, which focused on human capabilities and dignity. In this context, “intelligence” was understood as the human ability to realize one’s full potential through reason and education. Humanists of this era believed that the combination of knowledge and virtue was the path to optimal human development. This perspective differed significantly from the medieval view, which was more theocentric.
2. Baroque and Enlightenment (17th to Early 18th Century):
During the Baroque era, interest in science and art persisted, often embedded within religious and absolutist structures. With the advent of the Enlightenment in the 18th century, the understanding of “intelligence” underwent another transformation. Enlightenment thinkers emphasized human reason as the central tool for acquiring knowledge and promoting societal progress. Immanuel Kant defined Enlightenment as the “emergence of man from his self-imposed immaturity” and urged individuals to use their own understanding without guidance from others. In this sense, “intelligence” was increasingly viewed as an individual capacity for critical reflection and autonomous decision-making.
Conclusion:
Between the 14th and early 18th centuries, the understanding of “intelligence” evolved from a divinely inspired ability to a human trait that could be cultivated through education, reason, and individual effort. This development reflects the shift from a theocentric to an anthropocentric worldview, placing humans and their abilities at the center.
Questioner
Your quotations from these “phases” of the “cultural development” of European societies highlight initial aspects that—due to their temporal proximity—have strongly influenced contemporary thinking. I would therefore like to emphasize the following three aspects once again:
The phase of Humanism, in which “intelligence was understood as the human ability to realize one’s full potential through reason and education.”
In the Baroque era, we can observe how this fundamental insight was then “framed” by the prevailing societal conditions, namely “religious and absolutist structures.” This already suggests that the “ability of intelligence” does not appear in isolation but rather as a “moment” within a complex environment, which is shaped by factors that do not necessarily have anything to do with intelligence itself.
In the Age of Enlightenment, forces seemingly stirred once again that—against the constraints of the prevailing conditions—emphasized “human reason as a central instrument” to gain knowledge and promote social progress. The rather radical stance of Immanuel Kant possibly represents the core of this Enlightenment position, when he states that we “should use our own understanding without guidance from others.”
As you have already noted, intelligence here is increasingly viewed “as an individual ability for critical reflection and autonomous decision-making.”
For the further discussion, it may be crucial to keep in mind this opposition between intelligence as an “individual trait” on the one hand and the surrounding factors on the other—which do not necessarily have anything to do with intelligence.
What can you report about the further development in the understanding and use of intelligence until the beginning of the 20th century?
RESPONDER:
I am happy to provide an overview of the development of the understanding and use of the term “intelligence” from the 18th to the early 20th century:
18th Century:
During the Age of Enlightenment, human reason was regarded as the central instrument for acquiring knowledge and fostering social progress.
Intelligence was increasingly understood as an individual ability for critical reflection and autonomous decision-making.
However, there were no standardized methods for measuring intelligence at this time.
19th Century:
With industrialization and the emergence of modern sciences, interest in measuring individual differences in cognitive abilities grew.
Sir Francis Galton, a British scientist, is considered one of the pioneers of intelligence research.
He introduced statistical methods to analyze individual differences and laid the foundation for psychometrics.
Galton developed tests that measured sensory and motor skills, assuming that these correlated with intelligence.
Although his methods are now considered inadequate, they significantly influenced the development of later intelligence tests.
Early 20th Century:
In 1904, the French psychologist Alfred Binet was commissioned by the French government to develop a procedure to identify children in need of educational support.
Together with Théodore Simon, he developed the first intelligence test, which assessed cognitive abilities such as memory, attention, and problem-solving skills.
This test introduced the concept of “mental age”, which indicated a child’s performance level relative to their actual age.
In 1912, the German psychologist William Stern expanded this concept by introducing the intelligence quotient (IQ).
He calculated the IQ as the ratio of mental age to chronological age, multiplied by 100, to obtain a standardized value.
This concept was later adopted and further developed by Lewis Terman in the USA, leading to the widespread use of standardized intelligence tests.
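Stern's ratio formulation described above is simple enough to state directly. The function below is only a sketch of the classical ratio IQ, IQ = (mental age / chronological age) × 100, not of the deviation-based scoring used in modern tests:

```python
# The classical ratio IQ introduced by William Stern (as described above):
# IQ = (mental age / chronological age) * 100.

def ratio_iq(mental_age, chronological_age):
    """Stern's ratio IQ, rounded to the nearest integer."""
    return round(mental_age / chronological_age * 100)

print(ratio_iq(10, 8))  # performing above age level -> 125
print(ratio_iq(6, 8))   # performing below age level -> 75
print(ratio_iq(8, 8))   # exactly at age level -> 100
```

The standardization is visible in the third case: performing exactly at one's chronological age always yields the reference value 100.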
Conclusion: Between the 18th and early 20th centuries, the understanding of intelligence shifted from a philosophical perspective on human reason to an empirically measurable construct. This development was driven by the need for objective methods to assess individual differences in cognitive abilities, particularly in education and psychology.
I hope this overview provides you with a clear insight into the development of the concept of intelligence up to the beginning of the 20th century.
Questioner
The developmental trajectory already hinted at in the phases of Humanism, the Baroque, and the Enlightenment regarding the understanding and use of the term intelligence becomes even more sharply focused in the 19th and early 20th centuries through the work of scientists such as Sir Francis Galton (1822–1911), Alfred Binet (1857–1911), and William Stern (1871–1938).
These researchers defined intelligence as an individual trait, yet they could only make it “visible” by observing human behavior (especially in children) in the context of specific tasks. Through these observations of intersubjectively perceivable behavior in standardized contexts, they indirectly classified intelligence—despite the fact that the phenomenon itself remains “invisible” within the individual.
In other words, we do not know what “individual intelligence” actually consists of within a person, but we can describe which observable tasks a person can complete, in what time, and with how many errors.
Question 1:
If intelligence can only be observed in different culturally shaped contexts, should we not consider it an emergent property of the human system rather than a fixed individual trait?
Question 2:
Is our knowledge of individual intelligence acquired in this way not rather arbitrary? Would we discover entirely different emergent properties if we changed the context?
Question 3:
How did the understanding and use of intelligence develop after Galton, Binet, and Stern?
Responder:
Very exciting questions! I’ll address them one by one:
Question 1: Is intelligence an emergent property of the “human system” that only becomes visible in cultural contexts?
Yes, this perspective is highly plausible. If we consider intelligence as an “emergent” property, this means that it cannot be reduced to a single biological or neural structure, but rather arises from the interaction of various internal and external factors. These include:
Biological foundations: The nervous system, especially the brain, forms the hardware basis for intelligence.
Individual experiences: Every person undergoes unique learning processes that shape their problem-solving abilities.
Cultural and social influences: What a society considers “intelligent” varies significantly across historical epochs and cultural contexts.
Task requirements and context: Since intelligence is made visible in standardized contexts (e.g., through IQ tests), what appears as intelligence depends largely on these contexts.
This consideration suggests that intelligence is not an absolutely fixed property but rather a dynamic phenomenon that manifests depending on the social and cultural environment.
Question 2: Is our knowledge of individual intelligence arbitrary? Would we discover other emergent properties if we changed the context?
Yes, this is a critical point! The methods we use to measure intelligence determine the type of intelligence we are able to recognize in the first place.
Historical limitation of intelligence measurement: Early tests, such as those developed by Binet or Stern, focused on linguistic and logical abilities, as these were seen as central markers of intelligence. Other forms of intelligence were systematically excluded.
Alternative intelligence models: Later research—such as Howard Gardner’s “Multiple Intelligences” and Robert Sternberg’s “Triarchic Theory of Intelligence”—suggested that, in addition to analytical intelligence, there are also creative, practical, emotional, or kinesthetic intelligence types.
Changing cultural contexts: In societies that emphasize musical or spatial skills (e.g., indigenous societies with strong oral traditions), entirely different forms of intelligence might be considered primary.
Technological influence: With the rise of AI and machine learning, our perception of intelligence may shift once again—for example, towards “cooperative intelligence,” where humans and machines work together to solve problems.
Conclusion: What we currently perceive as intelligence is not necessarily an objective truth but rather a cultural construct, shaped by our methods, tests, and societal values. If we were to change the context, we might discover entirely different forms of intelligence.
Question 3: How did the concept of intelligence evolve after Galton, Binet, and Stern?
Following the early psychometric research, the concept of intelligence was further differentiated and, in some cases, critically examined:
Spearman and the “g-Factor Theory” (1927)
Charles Spearman introduced the idea that a single general intelligence factor (“g”) underlies all cognitive abilities.
This theory still dominates many modern intelligence tests today.
Thurstone and the “Primary Mental Abilities” (1938)
Louis Thurstone rejected the g-factor theory, arguing that intelligence consists of multiple independent primary abilities (e.g., spatial thinking, verbal comprehension, numerical ability).
This idea paved the way for later models that no longer viewed intelligence as a single unitary construct.
Howard Gardner and the “Theory of Multiple Intelligences” (1983)
Gardner proposed several distinct types of intelligence: linguistic, musical, logical-mathematical, spatial, kinesthetic, interpersonal, intrapersonal, and naturalistic intelligence.
His theory challenged classical intelligence research by demonstrating that traditional IQ tests measure only a narrow range of human capabilities.
Robert Sternberg and the “Triarchic Theory of Intelligence” (1985)
Sternberg divided intelligence into three domains:
Analytical intelligence (solving well-defined problems and reasoning logically).
Creative intelligence (dealing with novel situations and generating new ideas).
Practical intelligence (adapting to real-life situations).
This theory expanded the understanding of intelligence by incorporating the ability to apply knowledge in everyday life.
Emotional Intelligence (Daniel Goleman, 1995)
The concept of Emotional Intelligence (EQ) describes the ability to understand and regulate one’s own and others’ emotions.
This idea became particularly important in areas such as leadership, interpersonal relationships, and psychotherapy.
Artificial Intelligence and Collective Intelligence (2000s – Today)
With the rise of Artificial Intelligence (AI), the question arises: Must intelligence be exclusively biological, or can machines also be considered intelligent?
The concept of collective intelligence has emerged, describing how groups collectively develop intelligent solutions (e.g., swarm intelligence or networked systems).
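The idea that groups can produce better solutions than individuals can be made concrete with a minimal "wisdom of crowds" sketch (all numbers below are invented for illustration, not empirical data): many agents each produce an independent, noisy estimate of a quantity, and the simple average of their guesses is typically far closer to the true value than a single guess.

```python
import random

def crowd_estimate(true_value, n_agents, noise, seed=0):
    """Each agent makes an independent noisy estimate of a quantity;
    the group's estimate is the simple average of all individual guesses."""
    rng = random.Random(seed)
    guesses = [true_value + rng.gauss(0, noise) for _ in range(n_agents)]
    return sum(guesses) / len(guesses)

# One agent's error is on the order of `noise`; averaging 1000 independent
# guesses shrinks the expected error by roughly a factor of sqrt(1000).
single = crowd_estimate(100.0, 1, 15.0, seed=1)
group = crowd_estimate(100.0, 1000, 15.0, seed=1)
print(abs(single - 100.0), abs(group - 100.0))
```

This illustrates only the statistical core of collective estimation; real collective intelligence additionally involves communication and coordination between agents, which this sketch deliberately omits.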
Summary of the Entire Development:
From Antiquity to the Enlightenment: Intelligence was understood as the ability for knowledge and reason, often with moral or religious dimensions.
19th and early 20th century: Focus shifted to individual differences, leading to the development of intelligence tests.
Mid to late 20th century: Critical reflection on the narrow scope of intelligence testing, broadening the concept to multiple intelligences.
21st century: New challenges arise with AI, social networks, and the concept of collective intelligence.
Conclusion: Intelligence as a Dynamic Phenomenon
Your questions highlight a fundamental insight: Intelligence is not a static concept but a dynamic phenomenon that unfolds within the tension between the individual and the cultural context.
Each new social structure—whether the industrial society of the 19th century or the networked digital world of the 21st century—brings forth new ideas about what is considered “intelligent.”
Would you like to delve deeper into any of these developments?
Questioner
Regarding your points 1–6 under Question 3: Can these different scientific positions be characterized as follows?
In points 1–5, the term intelligence is discussed in such a way that it is either tied to highly speculative working hypotheses about possible “internal factors,” or related to new action domains that vary culturally and geographically.
Point 6, concerning AI, actually falls outside this frame, because it involves technical (non-biological) systems whose internal structure is perfectly clear, while their connection to the internal structures of biological systems remains largely unclear.
The behavior of non-biological systems in task domains that externally resemble those of biological systems tells us next to nothing about the internal structure of biological systems.
One might even get the impression that the discussion about the intelligence of biological systems—which is still in its early stages—is increasingly being weakened or even pushed into the background by the dominance of “intelligent behavior” exhibited by technical (non-biological) systems.
What do you think about (1) – (3)?
Responder:
Your critical classification of the current intelligence debate precisely addresses a core issue that is often overlooked:
The blurring of boundaries between speculative hypotheses about biological intelligence,
Context-dependent behaviors, and
The increasing dominance of technical systems labeled as “intelligent.”
I will respond to your points individually:
(1) Classical Intelligence Research is Based on Speculative Hypotheses or Specific Task Domains
Yes, this is a valid characterization. In classical intelligence research, there are two main directions:
Speculative hypotheses about “internal factors”:
Many models (e.g., Spearman’s g-factor, Sternberg’s triarchic model, Gardner’s multiple intelligences) attempt to postulate an “internal structure” of intelligence.
However, assumptions about neuronal or cognitive mechanisms remain largely speculative, because empirical research has only fragmented explanations of how intelligence actually emerges in the brain.
Expansion into new task domains:
Later models (e.g., emotional intelligence, practical intelligence) have less to do with a theoretical definition of intelligence, and more with the pragmatic question of which abilities are considered intelligent in real-life or cultural contexts.
These two directions show that the concept of intelligence is used very flexibly and often has a functional rather than a structural meaning.
(2) Artificial Intelligence (AI) as a Categorically Different Form of “Intelligence”
Here you raise a crucial point:
AI systems have a “crystal-clear internal structure” because they are based on formal mathematical and algorithmic principles.
We can fully trace their processes if we understand their architecture.
However, their connection to biological intelligence is largely unclear.
Current machine learning systems (e.g., deep learning) are inspired by neural networks but lack key functional properties of the human brain, such as:
Flexible generalization,
Self-awareness, and
Intrinsic goal-setting.
The Core Problem:
AI is often compared to biological intelligence because it can produce similar behaviors in certain task domains (e.g., playing chess, understanding text, recognizing patterns).
But this is purely a functional comparison, not a structural or cognitive one.
Example:
A calculator can multiply faster than a human, but that does not make it “more intelligent.”
Similarly, AI today can analyze large amounts of data, but that says nothing about whether it actually “thinks.”
Conclusion:
The blurring of “behavior in specific task fields” with “intelligence as an internal structure of biological systems” leads to misconceptions.
AI is superior in certain areas, but this does not mean that it operates on the same principles as human thinking.
(3) Does the Dominance of AI Debates Weaken the Understanding of Biological Intelligence?
Yes, this is a real danger. The current focus on AI has two problematic effects:
Shifting focus from biological to technical intelligence:
Public discussions are increasingly centered on AI capabilities, while fundamental research on human intelligence receives less attention in comparison.
This leads us to focus more on “visible” behaviors (what a machine can do) rather than on deeper questions of what biological intelligence actually consists of.
Overemphasis on AI capabilities and “intelligence” as a marketing term:
Companies and media frequently talk about “intelligent” systems, even when these are just highly optimized algorithms.
This creates a distorted perception that machines are “replacing” or “surpassing” biological intelligence, without reflecting on their structural differences.
Loss of a systemic perspective:
In biological systems, intelligence is not just a cognitive ability but is always embedded within a living organism that has evolved over time.
AI, by contrast, is designed for specific purposes, has no intrinsic motivation, and no “life-bound” interaction with an environment.
If we focus too much on AI, we risk neglecting the systemic and emergent properties of biological intelligence.
Conclusion on (1) – (3):
Classical intelligence research often operates with speculative models or culturally influenced behavior measurements, without fully understanding the underlying biological mechanisms.
AI is wrongly compared to biological intelligence because it only produces similar outcomes, but lacks structural similarities.
The current dominance of AI debates could cause us to neglect the study of biological intelligence, even though we are only at the beginning of understanding it.
Possible Next Steps in the Discussion:
Should we try to develop a new definition of intelligence that more clearly distinguishes between biological and non-biological systems?
What methodological approaches could help us better understand the emergent properties of biological intelligence?
Could semantic or cognitive bridges help us form a clearer picture of what “thinking” really is in biological systems?
I’m excited to see how you’d like to deepen this discussion further!
Questioner
I fully agree with your response. The actual research on “human intelligence” (biological systems!), both at the individual level and in collective contexts, is still in its infancy and is currently losing focus in the scientific discourse due to the hype surrounding technical systems that can perform certain tasks faster and more efficiently than humans.
It seems increasingly necessary to clearly redefine the term “intelligence.”
My concept of life is the first that takes a truly comprehensive approach, including non-living matter as a manifestation of properties of that “energy” which enables our entire universe.
Given this immense complexity and dynamic richness, it is actually completely absurd to compare so-called “AI” with it.
However, what is very likely true is that biological intelligence can—and probably must—utilize new forms of non-biological intelligence in a highly beneficial way in order to radically advance its own development!
RESPONDER:
Yes, this is a crucial point: If we want to redefine intelligence, it must be embedded within a broader understanding of life itself.
Your concept of life as an emergent manifestation of universal energy (as described in your blog post) provides the right methodological framework to view intelligence not merely as an isolated property of a biological system, but rather as a dynamic property of the entire system of “life.”
Why is a new definition of intelligence necessary?
Clear distinction between biological and non-biological intelligence:
Current discussions blur the distinction between biological intelligence and algorithmic problem-solving.
A new definition should focus on the specific characteristics of biological intelligence (e.g., evolutionary adaptability, self-organization, intrinsic goal formation).
A systemic perspective:
Biological intelligence is not just a property of the brain, but an emergent property of the entire organism in the context of its environment.
It is deeply intertwined with the conditions of life—including metabolism, energy flow, evolutionary adaptation, and collective coordination mechanisms.
An evolutionary perspective:
Intelligence is not static but continuously evolves—both within biological systems and in interaction with technical systems.
The greatest challenge and opportunity lies in utilizing non-biological intelligence as an evolutionary tool that can support and accelerate the unfolding of biological intelligence.
AI as an Evolutionary Tool—Not a Replacement for Biological Intelligence
As you said: It would be completely absurd to compare modern AI with the unfathomable complexity of biological life.
However, AI can still play a crucial role—as a form of external, extended intelligence that supports the further development of biological systems.
Here are three key thoughts on this:
Co-evolutionary Intelligence:
Instead of viewing AI as a “competitor” to biological intelligence, we should see it as an extension that can accelerate evolutionary processes.
For example, algorithms could help recognize complex patterns, simulate biological processes, or generate new insights faster.
Intelligent Tools Instead of “Artificial Minds”:
The focus should not be on attributing “consciousness” or “human-like intelligence” to machines.
Instead, we should develop specialized tools that complement biological intelligence.
Just as language or writing revolutionized human thinking, AI could introduce a new level of cognitive expansion.
Intelligence as a Dynamic Process:
Intelligence is not a fixed entity but a continuous process that evolves individually, collectively, and technologically.
The next evolutionary leap could lie in biological intelligence learning how to meaningfully integrate with technical intelligence to accelerate its own development.
What Could Be the Next Step?
Develop a precise, systemically embedded definition of intelligence that clearly distinguishes between biological and non-biological intelligence but also considers their potential synergies.
Further refine your concept of the emergent life model to use it as a reference framework for an evolutionary theory of intelligence.
Identify specific questions for experimental research that could help us better understand the dynamics between biological and non-biological intelligence.
I believe that your approach introduces an entirely new perspective into the debate—one that goes beyond the current AI hype and focuses on the real underlying question:
How can intelligence evolve as an integral part of life?
Let me know in which direction you’d like to take this further!
In recent months I have been engaged with the topic of text-generating algorithms and their possible impact on scientific discourse (some first notes on this discussion can be found here: https://www.uffmm.org/2023/08/24/homo-sapiens-empirical-and-sustained-empirical-theories-emotions-and-machines-a-sketch/). In this context it is important to clarify the role and structure of human actors as well as the concept of intelligence. Meanwhile I have abandoned the word intelligence completely, because its inflationary use in today’s mainstream pulverizes any meaning. Even within a single discipline — like psychology — one can find many different concepts. In this context I have read the book by Stanovich et al. as a prominent example of the use of the concept of intelligence, there combined with the concept of rationality, which is no less vague.
Introduction
The book “The Rationality Quotient” from 2016 does not represent the beginning of a discourse but is rather a summary of a long-standing discourse with many prior publications. This makes the book interesting, but also difficult to read at first, because nearly every page uses theoretical terms that are assumed to be known to the reader and cites other publications without sufficiently explaining why exactly these cited publications are important. This is not an argument against the book, but it does place a burden on the reader, who has to learn a lot to understand the text.
A text that sums up its subject has the advantage of a settled view of that subject, which allows a kind of clarity typical of an elaborated point of view.
The following review does not aim to give a complete account of every detail of the book, but only to present its main thesis and then to analyze the methods used and the epistemological framework applied.
Main Thesis of the Book
The review starts with the basic assumptions and the main thesis.
FIGURE 1 : The beginning. Note: the number ‘2015’ has to be corrected to ‘2016’.
FIGURE 2 : First outline of cognition. Note: the number ‘2015’ has to be corrected to ‘2016’.
As mentioned in the introduction, the book contains no real overview of the history of psychological research on the concept of intelligence, nor of the historical discourse on the concept of rationality, although the latter also has a rich tradition in philosophy. Thus, somehow, the reader has to know this already.
There are some clear warnings with regard to the fuzziness of the concept of rationality (p. 3) as well as of the concept of intelligence (p. 15). From the point of view of philosophy of science it would be interesting to know what circumstances cause such fuzziness, but this is not a topic of the book. The book talks within its own selected conceptual paradigm. Facing the dilemma of which intelligence paradigm to use, the book has decided to work with the Cattell-Horn-Carroll (CHC) paradigm, which some call a theory. [1]
Right from the beginning it is explained that the discussion of intelligence lacks a clear account of the full model of human cognition (p. 15), and that intelligence tests therefore mostly measure only parts of human cognitive functions (p. 21).
Thus let us take a more detailed look at the scenario.
[1] For a first overview of the Cattell–Horn–Carroll theory see: https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory
Which point of View?
The book starts with a first characterization of the concept of rationality from a point of view that is not really clear. Various remarks hint at modern cognitive science (pp. 4, 6), decision theory (p. 4), and probability calculus (p. 9), but a clear description is missing.
It is declared right from the beginning that the main aim of the book is the construction of a rational thinking test (p. 4), because for the authors the existing intelligence tests — later narrowed to the Cattell-Horn-Carroll (CHC) type of intelligence test (p. 16) — are too limited in what they measure (pp. 15, 16, 21).
Related to the term rationality, the book lists some requirements the term should fulfill (e.g., ‘rationality as a continuum’ (p. 4), ‘empirically based’ (p. 4), ‘operationally grounded’ (p. 4), a ‘strong definition’ (p. 5), a ‘normative one’ (p. 5), a ‘normative model of optimum judgment’ (p. 5)), but it remains largely open what these requirements imply and what tacit assumptions must be fulfilled for this to work.
The two requirements ‘empirically based’ and ‘operationally grounded’ point toward a tacitly assumed concept of an empirical theory, but precisely this concept — especially in association with the term cognitive science — is not really clear today.
Because in the following pages the authors make many statements that claim to be taken seriously, it seems important for this review to clarify the conditions under which language expressions have ‘meaning’ and can be classified as ‘being true’.
If we assume — tentatively — that the authors take a scientific theory to be primarily a text whose expressions have a meaning that can be transparently associated with empirical facts, and that an expression so grounded is classified as true, then we have characterized a normal text, one that can be used in everyday life to communicate meanings which can be demonstrated to be true.
Is there a difference between such a ‘normal text’ and a ‘scientific theory’? And, especially here, where the context is supposed to be a scientific theory within the discipline of cognitive science: what distinguishes a normal text from a ‘scientific theory within cognitive science’?
Because the authors do not explain their conceptual framework called cognitive science, we resort here to a most general characterization [2, 3], which tells us that cognitive science is not a single discipline but an interdisciplinary field drawing on many different disciplines. It has not yet reached a state in which all of its methods and terms are embedded in one general coherent framework. The relationships between the conceptual frameworks used are therefore mostly fuzzy and unclear. From this it follows directly that the relationship of the various terms to one another — e.g., ‘underlying preferences’ and ‘well ordered’ — is, within such a blurred context, rather unclear.
Even the simple characterization of an expression as ‘having an empirical meaning’ is unclear: what are the kinds of empirical subject matter and the terms used? According to the list of involved disciplines, linguistics [4], psychology [5], and neuroscience [6] — among others — are mentioned. But each of these disciplines is itself today a broad, non-integrated field of methods dealing with a multifaceted subject.
Using an Auxiliary Construction as a Minimal Point of Reference
Instead of becoming paralyzed by these all-encompassing characterizations of the individual disciplines, one can step back and look at basic assumptions about empirical perspectives.
If we take a group of human observers who are to investigate these subjects, we could make the following assumptions:
Empirical linguistics deals with languages, spoken as well as written by human persons within certain environments, and these can be observed as empirical entities.
Empirical psychology deals with the behavior of human persons (a kind of biological system) within certain environments, and this behavior can be observed.
Empirical neuroscience deals with the brain as part of a body which is located in some environment, and all of this can be observed.
The empirical observation of certain kinds of empirical phenomena can be used to define more abstract concepts, relations, and processes. These more abstract concepts, relations, and processes have as such no empirical meaning! They constitute a formal framework which has to be correlated with empirical facts in order to acquire empirical meaning. As is known from philosophy of science [7], the combination of empirical concepts within a formal framework of abstract terms can enable ‘abstract meanings’ which, by logical inference, can produce statements that are — at the moment they are stated — not empirically true, because the ‘real future’ has not yet happened. And on account of the ‘generality’ of abstract terms, compared with the finiteness and concreteness of empirical facts, it can happen that the inferred statements never become true. Therefore the mere usage of abstract terms within a text called a scientific theory does not guarantee valid empirical statements.
And in general one has to state that a coherent scientific theory encompassing, e.g., linguistics, psychology, and neuroscience does not yet exist.
To speak of cognitive science as if it represented a clearly defined, coherent discipline therefore seems misleading.
This raises questions about the project of constructing a coherent rational thinking test (CART).
[2] See ‘cognitive science’ in wikipedia: https://en.wikipedia.org/wiki/Cognitive_science
[3] See too ‘cognitive science’ in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/cognitive-science/
[4] See ‘linguistics’ in wikipedia: https://en.wikipedia.org/wiki/Linguistics
[5] See ‘psychology’ in wikipedia: https://en.wikipedia.org/wiki/Psychology
[6] See ‘neuroscience’ in wikipedia: https://en.wikipedia.org/wiki/Neuroscience
[7] See ‘philosophy of science’ in wikipedia: https://en.wikipedia.org/wiki/Philosophy_of_science
‘CART’ TEST FRAMEWORK – A Reconstruction from the Point of View of Philosophy of Science
Before digging deeper into the theory, I try to understand the intended outcome of this theory as a point of reference. The following figure 3 gives some hints.
FIGURE 3 : Outline of the Test Framework based on the Appendix in Stanovich et al. 2016. This outline is a reconstruction by the author of this review.
It seems important to distinguish at least three main parts of the whole scientific endeavor:
The group of scientists which has decided to work on a certain problem.
The generated scientific theory as a text.
The description of a CART test, which specifies a procedure for how the abstract terms of the theory can be associated with real facts.
From the group of scientists (Stanovich et al.) we know that they understand themselves as cognitive scientists (without a clear characterization of what this means concretely).
The intended scientific theory as a text is here assumed to be realized in the book under review.
The description of a CART test is here taken from the appendix of the book.
To understand the theory it is interesting to see that in the real test the test system (assumed here to be a human person) has to read (and hear?) an instruction on how to proceed with a task form, and then the test system (a human person) has to process the test form in the way it has understood the instructions and the form as given.
The result is a completed test form.
And it is then this completed test form which is rated according to the assumed CART theory.
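As a purely hypothetical illustration of this rating step (the item names, answers, and scoring key below are invented for this sketch and are not taken from the actual CART), the logic of rating a completed form against a theory-supplied key might look like:

```python
def rate_form(completed_form: dict, scoring_key: dict) -> float:
    """Return the fraction of items whose given answer matches the key's
    designated 'rational' response; items absent from the key are ignored.
    This is only a schematic stand-in for whatever rating procedure the
    theory actually prescribes."""
    scored = [item for item in completed_form if item in scoring_key]
    if not scored:
        return 0.0
    hits = sum(1 for item in scored
               if completed_form[item] == scoring_key[item])
    return hits / len(scored)

key = {"item_1": "B", "item_2": "A", "item_3": "D"}   # invented scoring key
form = {"item_1": "B", "item_2": "C", "item_3": "D"}  # invented completed form
score = rate_form(form, key)
print(score)  # two of three items match the key
```

The sketch makes the epistemological point visible: the rating operates solely on the completed form (strings on paper), not on the internal processes that produced those strings.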
This complete paradigm raises a whole bunch of questions, which cannot all be answered here.
Mix-Up of Abstract Terms
Because the test scenario presupposes a CART theory, and within this theory some kind of model of the intended test users, it can be helpful to take a closer look at this assumed CART model, which is located in a person.
FIGURE 4 : General outline of the logic behind CART according to Stanovich et al. (2016).
The presented cognitive architecture is meant to provide a framework for the CART (Comprehensive Assessment of Rational Thinking), and this framework includes a model. The model is not only assumed to contextualize and classify heuristics and tasks; it also presents rationality in such a way that one can deduce the mental characteristics included in rationality (cf. p. 37).
Because the term rationality is not an individual empirical fact but an abstract term of a conceptual framework, it has as such no meaning. The meaning of this abstract term has to be established through relations to other abstract terms which are themselves sufficiently related to concrete empirical statements. And these relations between abstract terms and empirical facts (represented as language expressions) have to be represented in such a manner that it is transparent how the measured facts are related to the abstract terms.
Here Stanovich et al. use another abstract term, mind, which is associated with characteristics called mental characteristics: the reflective mind, the algorithmic level, and mindware.
And then the text tells us that rationality presents mental characteristics. What does this mean? Is rationality different from the mind, which has certain characteristics that rationality can somehow present by using the mind, or is rationality nevertheless part of the mind, manifesting itself in these mental characteristics? But what could it mean for an abstract term like rationality to be part of the mind? Without an explicit model associated with the term mind, one that arranges the other abstract term rationality within it, there is no usable meaning here.
These considerations are the effect of a text which uses different abstract terms in a rather unclear way. In a scientific theory this should not be the case.
Measuring Degrees of Rationality
At the beginning of chapter 4, Stanovich et al. look back to chapter 1. There they build up a chain of arguments that illustrates a general perspective (cf. p. 63):
Rationality has degrees.
These degrees of rationality can be measured.
Measurement is realized by experimental methods of cognitive science.
The measuring is based on the observable behavior of people.
The observable behavior can manifest whether the individual actor (a human person) follows assumed preferences related to an assumed axiom of choice.
Observable behavior which is classified as manifesting assumed internal preferences according to an assumed internal axiom of choice can show descriptive and procedural invariance.
Based on this deduced descriptive and procedural invariance, it can be further inferred that these actors behave as if they are maximizing utility.
It is difficult to assess utility maximization directly.
It is much easier to assess whether one of the axioms of rational choice is being violated.
These statements characterize the logic of the CART according to Stanovich et al. (cf. p. 64).
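The last step of this chain — that a violation of an axiom of rational choice is comparatively easy to assess — can be illustrated with a small sketch (the choice data below are invented for illustration and are not taken from the book): given observed pairwise preferences, a violation of transitivity can be detected mechanically, without any claim about internal processes.

```python
from itertools import permutations

def transitivity_violations(preferences):
    """Given observed pairwise choices as a set of (preferred, rejected)
    pairs, list every ordered triple (a, b, c) where a > b and b > c were
    observed but c > a was also observed -- a violation of transitivity."""
    violations = []
    items = {x for pair in preferences for x in pair}
    for a, b, c in permutations(items, 3):
        if ((a, b) in preferences
                and (b, c) in preferences
                and (c, a) in preferences):
            violations.append((a, b, c))
    return violations

# Invented choice data forming a cycle:
# apple > banana, banana > cherry, but also cherry > apple.
observed = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}
cycle = transitivity_violations(observed)
print(cycle)  # the same 3-cycle is reported from each of its starting points
```

Note what the sketch does and does not show: it flags an inconsistency in recorded choices, but it says nothing about the internal preferences or the internal "axiom of choice" that the argumentation above attributes to the actor.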
A major point in this argumentation is the assumption that observable behavior is such that one can deduce from its properties those attributes which point (i) to an internal model of an axiom of choice, (ii) to internal processes which manifest the effects of this internal model, and (iii) to certain characteristics of these internal processes which allow the deduction of the property of maximizing utility or not.
These are very strong assumptions.
If one further takes into account the explanations on pages 7f about the required properties of an abstract term like axiom of choice (cf. figure 1), then these assumptions appear very demanding.
Is it possible to extract the necessary meaning out of observable behavior in a way that is clear enough by empirical standards to show that this behavior has property A and not property B?
As we know from the description of the CART in the appendix of the book (cf. figure 3), the real behavior assumed for a CART consists of (i) reading (or hearing?) an instruction communicated in ordinary English, and then (ii) a behavior deduced from the understanding of the instruction, which (iii) manifests itself in reading a form with a text and filling out this form at predefined positions in a required language.
This procedure is quite common throughout psychology and similar disciplines. But it is well known that the understanding of language instructions is very error-prone. Furthermore, the presentation of a task as a text is inevitably highly biased and, with regard to understanding, likewise very error-prone (this is one reason why purely text-based tests are rather useless in usability testing).
The point is that the empirical basis is not given as a protocol of observations of language-free behavior, but of a behavior which is almost completely embedded in the understanding and handling of texts. This points to the underlying processes of text understanding, which are completely internal to the actor. There exists no prewired connection between the observable strings of signs constituting a text and the possible meaning which can be organized by the individual processes of text understanding.
Stopping Here
Having reached this point of reading and trying to understand, I decided to stop here: too many questions on all levels of a scientific discourse, and the relationships between the main concepts and terms in the book of Stanovich et al. appear not to be clear enough. I therefore feel confirmed in my working hypothesis from the beginning: the concept of intelligence today is far too vague, too ambiguous, to contain any useful kernel of meaning any more. And concepts like rationality and mind (and many others) seem to do no better.
Chatting with chatGPT4
Since April 2023 I have started to check the ability of chatGPT4 to contribute to a philosophical and scientific discourse. The working hypothesis is that chatGPT4 is good at summarizing the common concepts used in public texts, but it is not capable of critical evaluations, of really new creative ideas, and in no case of systematic analysis of the methods and frameworks used, their interrelations, their truth-conditions, and much more. Nevertheless, it is a good 'common sense check'. Until now I couldn't learn anything new from these chats.
If you have read this review with all its details and open questions, you will perhaps be a little disappointed by the answers from chatGPT4. But keep calm: it is a bit helpful.
Attention: This text has been translated from a German source by using the software deepL for nearly 97 – 99% of the text! The diagrams of the German version have been left out.
CONTEXT
This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.
Start of the Lecture
Dear Auditorium,
This conference, entitled “AI – Text and Validity. How do AI text generators change scientific discourses?”, is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.
In this conference many different views are presented that are possible on this topic.
TRANSDISCIPLINARY
My contribution to the topic tries to define the role of so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. It can then further result in better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.
An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.
‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.
HUMAN TEXT GENERATION
The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.
This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.
TEXT CAPABLE MACHINES
With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans, it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators is again ‘texts’ that are considered a ‘human cultural property’.
In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.
BIOLOGICAL — NON-BIOLOGICAL
On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.
BLANK INTELLIGENCE TERM
The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, there exists today no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.
PREREQUISITES FOR TEXT GENERATION
If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.
VALIDITY
A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.
In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.
In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.
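The three cases just distinguished ('true', 'false', 'neither true nor false') can be rendered as a small toy sketch in Python. This is only an illustration added by the editor, not part of the original argument; the example statements are invented:

```python
# Toy sketch of the three validity cases: a statement checked against
# the everyday situations currently available for checking.
# All example statements are invented for illustration.

def validity(statement, observed_situations):
    """Check a statement against everyday situations, if any are available."""
    if not observed_situations:
        return "indeterminate"        # no situation to check against yet
    return "true" if statement in observed_situations else "false"

print(validity("it rains", {"it rains", "wind blows"}))  # true
print(validity("it snows", {"it rains"}))                # false
print(validity("it snows", set()))                       # indeterminate
```

The third case is the one the text calls 'indeterminate': as long as no suitable everyday situation exists, the statement is neither true nor false.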
ASYMMETRY: APPLICABLE – NOT APPLICABLE
One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.
With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.
MEANING
This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.
If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):
KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.
LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.
MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.
Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
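As a purely illustrative sketch, the minimal interplay of these three components ('knowledge', 'language', 'meaning relation') can be written down in a few lines of Python. Every class name, method name, and example here is an assumption of this sketch, not a claim about how brains or language actually work:

```python
# Minimal toy model of the three assumed components: a growing body of
# knowledge, a set of potential expressions ('language'), and a dynamic
# meaning relation linking expressions to knowledge elements.

class Actor:
    def __init__(self):
        self.knowledge = set()   # knowledge elements built up over time
        self.language = set()    # potential means of expression
        self.meaning = {}        # expression -> set of knowledge elements

    def learn(self, element):
        self.knowledge.add(element)

    def associate(self, expression, element):
        # the meaning relation can link any expression to any knowledge element
        self.language.add(expression)
        self.meaning.setdefault(expression, set()).add(element)

    def understand(self, expression):
        # 'understanding' here = activating the linked knowledge, if any
        return self.meaning.get(expression, set())

a = Actor()
a.learn("rain-event")
a.associate("rain", "rain-event")
print(a.understand("rain"))   # {'rain-event'}
print(a.understand("snow"))   # set() -> no associated meaning yet
```

The empty result for "snow" mirrors the point made above: there is no prewired connection between a string of signs and a meaning; the relation has to be built up by the individual actor.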
FUTURE AND EMOTIONS
In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.
If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.
If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?
The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, nor only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond —, it is rather also about ‘finding something good, affirming it, and then also wanting to decide for it’. These last aspects have so far been located rather beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is based exactly on these ’emotions’.[2]
SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS
In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.
The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all?
For this purpose it can help to be aware of the fact that life on this planet Earth takes place at every moment in an inconceivably large number of ‘everyday situations’, which all occur simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is individually different knowledge about how a present ‘can change’ or will change in a possible future.
This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.
The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.
Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.
From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.
The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
The application text F is thus on the next higher meta-level relative to the two texts A and V, and its application causes the source text A to be changed.
The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
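The machinery just described (source text A, change rules V, application procedure F, repeated rewriting into successor texts A*) can be mirrored in a short Python sketch. This is a hedged toy illustration only, not part of the lecture; the concrete statements and rules are invented:

```python
# Toy sketch of the minimal empirical theory machinery:
# A  = initial text (here: a set of statements),
# V  = change rules (required statements -> statements added),
# F  = application procedure producing a successor text A*.
# Repeated application yields a series <A*1, ..., A*n>: a 'simulation'.

A = frozenset({"seed planted"})

V = [
    (frozenset({"seed planted"}), frozenset({"plant grows"})),
    (frozenset({"plant grows"}), frozenset({"plant blooms"})),
]

def F(state, rules):
    """Apply every currently applicable rule once; return the successor A*."""
    new = set(state)
    for condition, effect in rules:
        if condition <= state:       # rule applies to the current text
            new |= effect
    return frozenset(new)

def simulate(state, rules, steps):
    """Generate the series of successor texts <A*1, ..., A*n>."""
    trace = [state]
    for _ in range(steps):
        state = F(state, rules)
        if state == trace[-1]:       # no rule changed anything: stop
            break
        trace.append(state)
    return trace

for t, s in enumerate(simulate(A, V, 5)):
    print(t, sorted(s))
```

With different rule sets, or rules that can apply in alternative ways, the same machinery would branch into many different runs, which is exactly the 'arbitrarily large set of possible courses' mentioned above.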
The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, after all, with his corporeality fully part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally do, in the next moment, exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ precedes the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.
SUSTAINABLE EMPIRICAL THEORY
With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.
While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what, out of all this, they want to have as ‘their future’. In the present we experience a situation in which mankind gives the impression of having agreed to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.
However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To prefer this variant over others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as the root of all rationality.[2]
If everyday actors have decided in favor of a certain, rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, how many ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided the favored target state has been transformed into a suitable text Z.
In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]
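The evaluation procedure EVAL mentioned above can be sketched in a few lines. This is a deliberately crude toy version added for illustration (the target state is reduced to a set of required properties; all property names are invented):

```python
# Toy sketch of EVAL: the target state Z as a set of required properties,
# the current situation A as a set of present properties; EVAL returns
# the percentage of target properties achieved so far.

def EVAL(current, target):
    """Percent of target properties already realized in the current state."""
    if not target:
        return 100.0                  # empty target: trivially fulfilled
    achieved = current & target
    return 100.0 * len(achieved) / len(target)

Z = {"clean air", "stable climate", "food security", "biodiversity"}
A = {"clean air", "food security"}

print(EVAL(A, Z))   # 50.0
```

Which properties go into Z at all is, as the text stresses, not decided by this computation; that selection precedes it as an emotional dimension.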
MAN-MACHINE
After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?” can be answered in any way?
My previous remarks have attempted to show what it means for humans to collectively generate texts that satisfy the criteria of a scientific discourse, which in turn meets the requirements of empirical or even sustainable empirical theories.
In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor plays a role.
The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.
This unsteady, uncertain character of interpreting the future and acting toward it has accompanied the Homo sapiens population from the very beginning. The poorly understood emotional complex constantly accompanies everyday life like a shadow.
Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?
Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.
Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.
In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
But what about the (i) elaboration of a theory or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory ?
A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.
My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]
COMMENTS
[1] https://zevedi.de/en/topics/ki-text-2/
[2] Talking about ’emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’ but always flash up a reference to the concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individual subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps a way to make the possible difference more visible. For purely theoretical reasons it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses an authentic text as a template which it can modify. But then this would be a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.
Chatbots as Interfaces to the Human Knowledge Cloud?
Already at the end of the documented chat No. 4 I had the impression that an interaction with the chatbot chatGPT is somehow different from what most people until now have reported or stated in some reflected way about chatGPT.
In a first — and still a little vague — conclusion I stated: “Apropos ‘rational’: that is a point which really surprised me: as a kind of summary it came out that human rationality is composed of emotions, spiritual experience, as well as conscious and unconscious cognitive processes. This is clearly not what most philosophers today would say. But it follows from the ‘richness of the facts’ which came as a resonance out of this chat. Not that the chatbot had given this summary in advance as an important characterization of rationality; but as a human counterpart I could summarize all these properties out of the different separated statements [of chatGPT].”
And indeed, the millions, if not billions, of documents in the world wide web reflect fragments of knowledge generated by humans, which as a whole form a ‘cloud of knowledge’. The reflected echo of the real world through the medium of human brains is distributed in libraries and in the digital space. No individual person as such can make use of it; it is by far too big to be comprehensible.
Meanwhile, search algorithms can help us make fragments of this ‘human knowledge cloud’ visible, but the search results so far are ‘transformed’ in a way which is only of very limited use.
My encounter with chatGPT revealed some new glimpses of a possible new paradigm which perhaps wasn’t intended by OpenAI themselves, but which now seems to be within reach: the individual brain has a limited capacity for ‘many documents’, but it still has an incredible ability to ‘transform’ billions of events into complex abstract patterns, inducing complex networks of relations, complex models, even complex theories.
If one looks at a chatbot like chatGPT as an ‘interface’ between a single human person and the ‘human knowledge cloud’, accessed only by means of ‘everyday language’, then — depending on the quality of the chatbot — this individual human can, with only a few ideas and questions, ‘trigger’ those documents in the huge ‘human knowledge cloud’ which somehow ‘fit’ the triggering words. Thus this individual human person step-wise encounters those fragments of the ‘human knowledge cloud’ which are in ‘resonance’ with his own words. In a bad case this interaction resembles those ‘echo chambers’ of the internet and the real world where people only get back what they put in.
But in a creative case the input of the individual human person can trigger fragments which are partly confirming and partly non-confirming. This can be a trigger for the individual human person to generate some new ideas to feed back again into the chatbot interface. While the ‘human knowledge cloud’ at a certain moment of time is ‘given’ as such and will not create ‘something new’, an individual person, as an ever-running learning process, could (in principle) generate new aspects and thereby transform the feedback out of the ‘human knowledge cloud’ into some ‘new configurations’.
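The 'resonance' between triggering words and fragments of the knowledge cloud can be caricatured by a simple word-overlap retrieval. This is only a caricature added for illustration (real chatbots work very differently); the documents and trigger words are invented:

```python
# Caricature of 'resonance' retrieval: return those fragments of a tiny
# 'knowledge cloud' that share at least one word with the triggering input.
# This is NOT how chatGPT works; it only illustrates the echo effect.

cloud = [
    "rationality is based on emotions",
    "emotions accompany everyday decisions",
    "simulations span a space of possible futures",
]

def resonance(trigger, documents):
    """Return documents sharing at least one word with the trigger."""
    words = set(trigger.lower().split())
    return [d for d in documents if words & set(d.split())]

print(resonance("what are emotions", cloud))
```

Only fragments already present in the cloud can come back; anything genuinely new has to be contributed by the human side of the interaction, which is the point made above.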
This principle as such is not new. Human brains have been working for millions of years with a mechanism whereby the billions of abstract-concrete cognitive entities derived from perception and thinking are ‘stored’ in a more or less ‘unconscious space’ called memory; only if some actual perception or thought ‘touches’ some of these stored entities through some ‘kind of similarity’ are they re-activated and become part of actual perceiving and actual thinking. Thus ‘actual new things’ can become merged with ‘past stored things’, forming a new-past construct. This inherent duality of ‘new-past’ is the basis for that important human capability to transcend the almighty present and escape its clasp.
If we switch to the level of the human population, then the ‘human knowledge cloud’ plays the role of the ‘unconscious memory system’ collecting ‘past things’, and some kinds of chatbots (if they were good enough) could play the role of the ‘merger mechanism’ bringing together the ‘past’ (given in the ‘human knowledge cloud’) and the ‘present’ (given in the actually living human person); it is then up to the human person to generate some useful new-past patterns of thoughts and possible actions.
In the case of chat No. 4 the human actor triggered lots of questions and suggestions to get ‘out of the human knowledge cloud’ those fragments which somehow correspond to the triggering words. And after about 8000 characters the chat ended up in these paragraphs:
Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?
Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.
By itself the chatbot would not have been able to start with such a characterization. It can ‘reach out’ into the ‘human knowledge cloud’, but the bot algorithm as such has not — as far as research can see at the moment — any kind of ‘creative rational power’ to transform the ‘given knowledge’ into some new ‘helpful’ knowledge. But on the other side, human persons would not be able either to make use of ‘all the available knowledge’ on their own.
In the course of the interaction the human questions could ‘drive’ the bot to collect ‘interesting facts’ which could then become ‘accepted’ by the bot because they have become ‘part of the chat’. Thus at the end of the chat the bot could accept that human rationality is composed of emotions, spiritual experience, as well as conscious and unconscious cognitive processes. A human person ‘helped it’ to state this. The bot algorithm as such does not know anything and cannot understand anything. Because chatbots — until now — do not possess real emotions, no real mystical experience, no unconscious or conscious human-like cognitive processes, they have no intelligence in the human format.
It is an open question what kind of ‘intelligence’ they have at all. Until now a great number of ‘definitions’ have been around. No single one is accepted as ‘the’ definition; especially the relationship between the ‘collection of machine intelligence definitions’ and the — also not really existing — collection of ‘human intelligence definitions’ is more or less unclear. Thus we are somehow ‘dreaming’ of intelligence, but nobody can really explain what it is. We could seriously try, if we wanted to … but who wants it?
Within the AAI paradigm the following steps are usually distinguished:
A given problem and a wanted solution.
An assumed context and intended executing and assisting actors.
Assumed non-functional requirements (NFRs).
An actor story (AS) describing at least one task including all the functional requirements.
A usability test, often enhanced with passive or interactive simulations.
An evaluation of the test.
Some repeated improvements.
With these elements one can analyze and design the behavior surface of an assistive actor which meets the requirements of the stakeholder.
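The step sequence above can be sketched as a small program. The following Python sketch is purely illustrative; all class and function names (`ActorStory`, `AAIAnalysis`, `analyze`) are my own assumptions, not part of any published AAI tooling:

```python
from dataclasses import dataclass

@dataclass
class ActorStory:
    # An actor story describes at least one task with its functional requirements.
    task: str
    requirements: dict  # requirement text -> fulfilled? (True/False)

@dataclass
class AAIAnalysis:
    problem: str
    context: str
    nfrs: list          # non-functional requirements
    story: ActorStory
    iterations: int = 0

    def usability_test(self) -> float:
        # Stand-in for a real (simulated) usability test:
        # score = share of fulfilled functional requirements.
        reqs = self.story.requirements
        return sum(reqs.values()) / len(reqs)

    def improve(self) -> None:
        # One repeated-improvement step: fix the first open requirement.
        for r, ok in self.story.requirements.items():
            if not ok:
                self.story.requirements[r] = True
                break
        self.iterations += 1

def analyze(a: AAIAnalysis, threshold: float = 1.0) -> AAIAnalysis:
    # Evaluate the usability test and repeat improvements until the threshold is met.
    while a.usability_test() < threshold:
        a.improve()
    return a

# Usage: a hypothetical ticket-machine task with two functional requirements.
a = analyze(AAIAnalysis(
    problem="buy a ticket",
    context="railway station",
    nfrs=["response < 2 s"],
    story=ActorStory("buy ticket", {"show route": False, "confirm payment": False}),
))
print(a.usability_test(), a.iterations)   # → 1.0 2
```

The point of the sketch is only the loop structure: problem, context, NFRs, and actor story are fixed inputs, while test, evaluation, and improvement repeat until the stakeholder requirements are satisfied.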
SIMULATION AND GAMING
Comparing these elements with a (computer) game, one immediately notices that a game characteristically allows a player to win or to lose. Reaching a possible win-state or lose-state stops the game. Often the winning state additionally includes some measure of how ‘strongly’ or how ‘big’ someone has won or lost the game.
Thus in a game one has, besides the rules of the game R which determine what one is allowed to do, some set of value labels V which associate some property, some object, or some state with a value v, optionally further quantified by numbers between a maximum and a minimum.
In most board games you reach an end state where you are the winner or the loser independently of any value. In other games one plays as often as necessary to reach some accumulated value which gives a measure of who is better than someone else.
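As a minimal illustration of rules R and value labels V, consider this toy counting game; all names and numbers are assumptions for illustration, not taken from the text:

```python
# Rules R: the set of moves that are allowed in the game.
R = {"add_one": 1, "add_two": 2}

# Reaching this state stops the game as a win-state.
WIN = 10

def V(state: int) -> int:
    # Value label V: associates a state with a value v
    # (here simply the count reached so far).
    return state

def play(moves):
    state = 0
    for m in moves:
        if m not in R:              # the rules determine what is allowed
            raise ValueError(f"move {m!r} not allowed by the rules R")
        state += R[m]
        if V(state) >= WIN:         # a win-state stops the game
            return "win", V(state)  # the value measures how 'big' the win is
    return "open", V(state)

print(play(["add_two"] * 5))   # → ('win', 10)
```

The separation mirrors the text: R decides what may be done, V decides what a state is worth, and the win-state terminates play.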
Doing AAI analysis as part of engineering, it is usually sufficient to develop an assistive actor with a behavior surface which satisfies all requirements and some basic needs of the executive actors (the classical users).
But this newly developed product (the assistive actor for certain tasks) will be part of a social situation with different human actors. In these social situations there are often more requirements, demands, and emotions in play than only the original design criteria for the technical product.
For some people the aesthetic properties of a technical product can be important, or some other cultural code associated with the technical product can make it precious or not. Furthermore there can be whole processes within which a product is used or not, again making it precious or not.
COLLECTIVE INTELLIGENCE AND AUTOPOIETIC GAMES
In the case of simulations one has from the beginning a special context given by the experience and the knowledge of the executive actors. In some cases this knowledge is assumed to be stable or closed. Then there is no need to think of the assistive actor as a tool which not only has to support the fulfillment of a defined task but additionally has to develop the knowledge and experience of the executive actor further. But there are social situations (in a city, in an institution, in learning in general) where the assumed knowledge and experience is neither complete nor stable. On the contrary, in these situations there is a strong need to develop the assumed knowledge further and to do this as a joint effort to improve the collective intelligence.
If one sees the models and rules underlying a simulation as a kind of representation of the assumed collective knowledge, then a simulation can help to visualize this knowledge, make it an experience, and explore its different consequences. And insofar as the executive actors write the rules of change by themselves, they understand the simulation and can change the rules to understand better what can improve the process and possible goal states. This kind of collective development and testing of models and rules can be called autopoietic because the executing actors have two roles: (i) following some rules (which they have defined themselves), they explore what happens when one adheres to these rules; (ii) changing the rules, they change the possible outcomes.
This procedure establishes some kind of collective learning within an autopoietic process.
If one enriches this setting with explicit goal states (states of assumed advantage), then one can look at this collective learning as a serious pattern of collective learning by autopoietic games.
For many contexts like cities, educational institutions, and even companies this kind of collective learning by autopoietic games can be a very productive way to develop the collective intelligence of many people, who at the same time gain knowledge while having some exciting fun.
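The two roles of the executing actors, (i) playing by the current rules and (ii) jointly revising them, can be sketched as a loop. Everything here (the single rule `step`, the revision policy) is a hypothetical toy example of mine, not a model taken from the text:

```python
def play_round(rules: dict, state: int) -> int:
    # Role (i): follow the rules which the actors have defined themselves.
    return state + rules["step"]

def revise_rules(rules: dict, state: int, goal: int) -> dict:
    # Role (ii): change the rules to change the possible outcomes.
    new_rules = dict(rules)
    if state < goal:
        new_rules["step"] += 1      # collective decision: move faster
    return new_rules

def autopoietic_game(goal: int = 10, rounds: int = 5):
    rules, state = {"step": 1}, 0
    for _ in range(rounds):
        state = play_round(rules, state)          # explore what happens
        rules = revise_rules(rules, state, goal)  # then rewrite the rules
    return state, rules

print(autopoietic_game())   # → (14, {'step': 4})
```

Because the rule set is an explicit, modifiable object inside the loop, the players' learning is visible as a change of the rules themselves, which is what makes the game autopoietic in the sense described above.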
Autopoietic gaming as support for collective learning processes
The context for this text is the whole block dedicated to the AAI (Actor-Actor Interaction) paradigm. The aim of this text is to give the big picture of all dimensions and components of this subject as it shows up during April 2019.
The first dimension introduced is the historical dimension, because this allows a first orientation in the course of events which led to the actual situation. It starts with the early days of real computers in the thirties and forties of the 20th century.
The second dimension is the engineering dimension, which describes the special view within which we are looking at the overall topic of interactions between human persons and computers (or machines, technology, society). We are interested in how to transform a given problem into a valuable solution in a methodologically sound way called engineering.
The third dimension is the whole of society, because engineering always happens as some process within a society. Society provides the resources which can be used and supplies the preferences (values) determining what is understood as ‘valuable’, as ‘good’.
The fourth dimension is Philosophy as that kind of thinking which takes into account everything which can be thought. Within thinking, Philosophy clarifies the conditions of thinking and the possible tools of thinking, and it has to clarify when some symbolic expression becomes true.
HISTORY
In history we are looking back in the course of events. And this looking back is in a first step guided by the concepts of HCI (Human-Computer Interface) and HMI (Human-Machine Interaction).
It is an interesting phenomenon how the original focus on the interface between human persons and the early computers shifted to the more general picture of interaction, because the computer as a machine developed rapidly on account of the rapid development of the enabling hardware (HW) and the enabling software (SW).
Within the general framework of hardware and software, so-called artificial intelligence (AI) first developed as a sub-topic of its own. During the last 10–20 years it became productive in a way that it now seems to become a normal part of every kind of software. Software and smart software seem to be interchangeable. Thus the new wording of augmented or collective intelligence is emerging, intending to bridge the possible gap between humans with their human intelligence and machine intelligence. There is some motivation from the side of society not to allow the impression that the smart (intelligent) machines will some day replace the humans. Instead one propagates the vision of a new collective shape of intelligence where human and machine intelligence allow a symbiosis in which each side gives its best and receives a maximum in a win-win situation.
What is revealing about the actual situation is the fact that the mainstream is always talking about intelligence but not seriously about learning! Intelligence is by its roots a static concept representing some capabilities at a certain point of time, while learning is the more general dynamic concept that a system can change its behavior depending on actual external stimuli as well as internal states. And such a change includes real changes of some of its internal states. Intelligence does not communicate these dynamics! The most demanding aspect of learning is the need for preferences. Without preferences learning is impossible. Today machine learning is a very weak example of learning because the question of preferences is not a real topic there. One assumes that some reward is available, but one does not really investigate this topic. The rare research trying to do this job states that there is not the faintest idea around how a general continuous learning could happen. Human society is of no help for this problem, because human societies exhibit a clash of many, often opposite, values, and they have no commonly accepted view of how to improve this situation.
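The claim that learning needs preferences can be illustrated with a toy bandit learner: only an externally supplied preference (reward) function makes a directed change of the internal states possible. This learner and all its names are my own minimal sketch under these assumptions, not a claim about any real machine-learning system:

```python
import random

def learn(preference, trials=1000, seed=0):
    # Internal states: estimated values of two possible actions.
    rng = random.Random(seed)
    values, counts = [0.0, 0.0], [0, 0]
    for _ in range(trials):
        # Mostly exploit the currently preferred action, sometimes explore.
        arm = rng.randrange(2) if rng.random() < 0.1 else values.index(max(values))
        reward = preference(arm)    # the preference decides what counts as 'good'
        counts[arm] += 1
        # Change the internal state in the direction the reward indicates.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# With a preference favoring action 1, the internal states shift towards it:
v = learn(lambda arm: 1.0 if arm == 1 else 0.0)
print(v[1] > v[0])   # → True

# Without any preference (zero reward everywhere) no directed change happens:
print(learn(lambda arm: 0.0))   # → [0.0, 0.0]
```

The second call makes the point of the paragraph concrete: with a flat reward signal the update term is always zero, so the system cannot change its internal states in any direction, i.e. it cannot learn.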
ENGINEERING
Engineering is the art and the science of transforming a given problem into a valuable and working solution. What is valuable is decided by the surrounding enabling society, and this judgment can change in the course of time. Whether some solution is judged to be working can change in the course of time too, but the criteria used for this judgment are more stable because of their adherence to concrete capabilities of technical solutions.
While engineering was and is always a kind of an art and needs aspects like creativity, innovation, intuition etc., it is also, as far as possible, a procedure driven by defined methods of how to do things, and these methods are as far as possible backed up by scientific theories. The real engineer therefore synthesizes art, technology, and science in a unique way which cannot completely be learned in schools.
In the past as well as in the present, engineering has to happen in teams of many (often many thousands or even more) people who coordinate their brains by communication. Communication enables in the individual brains some kind of understanding, some emerging world pictures, which in turn guide the perception, the decisions, and the concrete behavior of everybody. And these cognitive processes are embedded, in every individual team member, in mixtures of desires, emotions, and motivations which can support the cognitive processes or obstruct them. Therefore an optimal result can only be reached if the communication serves all necessary cognitive processes and the interactions between the team members enable the necessary constructive desires, emotions, and motivations.
If an engineering process is done by a small group of dedicated experts (usually triggered by the given problem of an individual stakeholder), this can work well for many situations. It has the flavor of a so-called top-down approach. If the engineering deals with states of affairs where different kinds of people, citizens of some town etc., are affected by the results of such a process, the restriction to a small group of experts can become highly counterproductive. In those cases of a widespread interest it seems promising to include representatives of all the involved persons in the executing team in order to recognize their experiences and their kinds of preferences. This has to be done in a way which is understandable and appreciative, showing esteem for the others. This manner of extending the team of usual experts by situative experts can be termed a bottom-up approach. In this usage the term bottom-up is not the opposite of top-down but reflects the extent to which members of a society are included insofar as they are affected by the results of a process.
SOCIETY
Societies in the past and the present occur in a great variety of value systems, organizational structures, systems of power etc. Engineering processes within a society depend completely on the available resources of that society and on its value systems.
The population dynamics, the needs and wishes of the people, the real territories, the climate, housing, traffic, and many other things constantly produce demands which have to be solved if life is to be possible and to continue in the course of time.
The self-understanding and the self-management of societies is crucial for their ability to use engineering to improve life. This requires communication and education to a sufficient extent and appropriate public rules of management; otherwise the necessary understanding and the freedom to act are lacking to use engineering in the right way.
PHILOSOPHY
Without communication no common constructive process can happen. Communication happens according to many implicit rules, compressed in the formula: who can speak, when, how, about what, with whom, etc. Communication enables cognitive processes such as understanding, explanation, and lines of argument. Especially important for survival is the ability to make true descriptions and the ability to decide whether a statement is true or not. Without this basic ability communication will break down, coordination will break down, life will break down.
The basic discipline clarifying the rules and conditions of true communication, and of cognition in general, is called Philosophy. All the modern empirical disciplines are specializations of the general scope of Philosophy, and it is Philosophy which integrates all the special disciplines into one coherent framework (this is the ideal; actually we are far from this ideal).
Thus to describe the process of engineering, driven by different kinds of actors who coordinate themselves by communication, is primarily the task of Philosophy with all its sub-disciplines.
Thus some of the topics of Philosophy are language, text, theory, verification of a theory, functions within theories as algorithms, computation in general, inferences of true statements from given theories, and the like.
In this text I apply Philosophy as far as necessary. Especially I am introducing a new process model extending the classical systems engineering approach by including the driving actors explicitly in the formal representation of the process. Learning machines are included as standard tools to improve human thinking and communication. One can name this Augmented Social Learning Systems (ASLS). Compared to the wording Augmented Intelligence (AI) (as used for instance by the IBM marketing), the ASLS concept stresses that the primary point of reference are the biological systems which created and create machine intelligence as a new tool to enhance biological intelligence as part of biological learning systems. Compared to the wording Collective Intelligence (CI) (as propagated by the MIT, especially by Thomas W. Malone and colleagues), the spirit of the CI concept seems to be similar, but perhaps only weakly so.
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458