
chatGPT4: Cognitive Science, HCI, Rationality (Stanovich 2016), Dual-Process Theory (Kahneman)

Last change: Nov 1, 2023

CONTEXT

This text belongs to the theme ‘chats with chatGPT4’.

After attempting a review of the book by Stanovich et al. (2016), I checked what chatGPT4 could possibly contribute. As expected, chatGPT4 reproduced all the common concepts used in the context of cognitive science, Stanovich, and Kahneman. None of the real questions about methodological details were answered. Nevertheless, chatGPT4 lists all the main concepts one should know when dealing with these subjects, but that is all. This is not really science, and it has nothing to do with philosophy or philosophy of science.

PROTOCOL OF THE CHAT

User

What do you know about cognitive science

ChatGPT

Cognitive science is an interdisciplinary field that studies the mind, intelligence, and behavior from multiple perspectives. At its core, it seeks to understand how the human mind works and how it might be replicated or modeled, including through artificial intelligence. The field encompasses a broad range of topics, from the basic processes of cognition to complex interactions with the environment. Here are some key aspects of cognitive science:

1. **Interdisciplinary Nature**: Cognitive science integrates methods and theories from several disciplines, including psychology, neuroscience, linguistics, philosophy, computer science, anthropology, and biology.

2. **Research Areas**:
– Perception: Understanding how organisms interpret sensory data.
– Attention: Investigating how the mind allocates cognitive resources to various stimuli.
– Memory: Exploring the processes involved in storing and retrieving information.
– Language: Studying the cognitive processes underlying language acquisition, comprehension, and production.
– Problem Solving and Decision Making: Analyzing how humans and other animals think, make decisions, and solve problems.
– Consciousness: Examining the nature of subjective experience and self-awareness.

3. **Cognitive Neuroscience**: This branch of cognitive science looks at how brain structures and functions are related to cognitive processes. It makes use of technologies such as fMRI and EEG to study the brain in action.

4. **Computational Models**: Cognitive scientists often construct computational models to simulate aspects of human cognition. These models can range from symbolic representations that mimic logic and reasoning to connectionist models that emulate neural networks.

5. **Philosophy of Mind**: Philosophical questions about the nature of mind, the relationship between mind and body, and the nature of consciousness are central to cognitive science. Philosophy contributes questions about the nature of knowledge, perception, and the structure of language, influencing cognitive theories.

6. **Human-Computer Interaction (HCI)**: This field draws on cognitive science principles to improve the design of user interfaces and enhance user experience with technology.

7. **Education and Application**: Insights from cognitive science have been applied to education, informing teaching methods and instructional design to align with how the brain learns best.

8. **Artificial Intelligence**: Cognitive science and AI are deeply intertwined, with insights from the study of human cognition informing the development of algorithms and systems that aim to replicate aspects of human thought.

Cognitive science continues to evolve, with new findings and technologies perpetually reshaping our understanding of the human mind. As it progresses, it offers potential advancements not only in AI but also in how we approach mental health, education, design, and a multitude of other areas that require an understanding of how we think and learn.

User

You are listing many disciplines engaged in cognitive science. How are all these disciplines interrelated?

ChatGPT

The disciplines within cognitive science are interrelated through their shared goal of understanding the mind and cognition. Here’s how each contributes to the field and how they intersect with each other:

1. **Psychology**: Psychology contributes experimental methods and theories about behavior and mental processes. It often serves as the basis for hypotheses about cognitive mechanisms that other disciplines then further investigate.

2. **Neuroscience**: Neuroscience provides insights into the brain structures and neural circuits that underlie cognitive functions. It adds a biological perspective to theories of cognition that can confirm, refute, or refine psychological theories.

3. **Linguistics**: Linguistics studies the structure, use, and psychology of language, which is a complex cognitive ability. Theories of syntax, semantics, and language acquisition inform models of how the brain processes language.

4. **Philosophy**: Philosophy addresses fundamental questions about the nature of knowledge, mind, reality, and values. It offers frameworks for understanding consciousness, intentionality, and the mind-body problem, which can inform empirical research.

5. **Computer Science**: Computer science, particularly artificial intelligence, contributes algorithms and computational models that mimic cognitive processes. The feedback loop between AI and cognitive science is strong: cognitive science inspires AI development, and AI provides tools and metaphors for understanding cognition.

6. **Anthropology**: Anthropology brings in the cultural dimension, studying how cognition varies across different human societies. This helps to differentiate between universal cognitive processes and those that are culturally specific.

7. **Biology**: Biology, especially evolutionary biology, contributes to our understanding of the evolutionary pressures that shaped our cognitive abilities. It provides a framework for understanding why certain cognitive functions may have developed.

8. **Education**: Research in cognitive science directly impacts educational practices by providing insights into how learning and memory work, thus informing teaching strategies and curriculum development.

9. **Human-Computer Interaction (HCI)**: HCI draws on principles from psychology and computer science to design better interfaces. It uses cognitive models to predict how users will interact with technology and to create user experiences that complement human cognitive abilities.

The interrelation is often practical; for example:
– A cognitive psychologist might devise an experiment based on a philosophical question.
– A neuroscientist might use the experiment’s results to guide brain imaging studies.
– A linguist might analyze the language data from the experiment.
– A computer scientist might model the cognitive process observed in a computational framework.
– An anthropologist might compare these results with similar cognitive processes in different cultures.
– An educationalist might apply the findings to improve classroom teaching methods.
– An HCI researcher might use the findings to design better computer interfaces.

The intersection of these disciplines within cognitive science ensures a comprehensive approach to studying cognition, taking into account its complexity and richness by viewing it through multiple lenses.

User

HCI is an interesting example. HCI analyses the interaction between humans and machines with regard to certain tasks and within certain environments. Where do the cognitive models that are to be used come from?

Remark for the following: chatGPT4 offered two versions of an answer. I could select one and opted for version 2:

ChatGPT

Human-Computer Interaction (HCI) relies heavily on cognitive models to predict and interpret how users interact with computer systems. These cognitive models are derived from various sources within the broader field of cognitive science:

1. **Psychological Theories**: Many cognitive models in HCI are based on psychological theories about how the human mind processes information. For instance, models like the Cognitive Load Theory help designers understand how to present information to users without overwhelming them.

2. **Cognitive Architectures**: There are comprehensive computational frameworks that aim to simulate human cognitive processes. Examples include the ACT-R (Adaptive Control of Thought-Rational) and Soar architectures. These are often used in HCI to predict how users will learn new systems and solve problems.

3. **Human Information Processing Models**: HCI often uses specific models that describe the stages of human information processing, such as the Model Human Processor, which includes perceptual, cognitive, and motor processors, each with their own cycle times and memories.

4. **Attention Models**: Models of visual attention, such as the See-Think-Do framework, explain how users allocate their attention when interacting with interfaces and can inform the design of systems to make important information more salient.

5. **Memory Models**: Understanding working memory and long-term memory is crucial for interface design. HCI uses models like the Working Memory model by Baddeley and Hitch to predict how users retain and process information while completing tasks.

6. **Error Prediction Models**: Models like Norman’s Seven Stages of Action are used to predict possible errors in human-computer interaction, allowing for the design of error-resistant systems.

7. **Task Models**: Task analysis models, such as GOMS (Goals, Operators, Methods, and Selection rules), are used to estimate how long it takes a user to accomplish a task and to identify ways to streamline the interaction.

8. **Ecological Models**: These models, like the distributed cognition approach, look at cognitive processes as distributed across people, time, and tools, considering the environment in which HCI takes place.

9. **User Mental Models**: HCI considers the mental models that users form of how systems work. Designers try to align interfaces with these mental models to make them more intuitive.

10. **Usability Heuristics**: While not models in the traditional sense, heuristics like those proposed by Jakob Nielsen are based on principles of cognitive psychology and are used to evaluate user interfaces.

11. **Empirical Studies**: Empirical studies, including both qualitative and quantitative user research, continue to inform and refine cognitive models. User testing and observation can lead to the development of new models or the refinement of existing ones.

These models are not mutually exclusive and can often be combined or used in parallel to tackle different aspects of HCI design and evaluation. The choice of model(s) typically depends on the particular aspect of human cognition that is most relevant to the interaction being studied or the problem being solved. By applying cognitive models, HCI professionals aim to create user-centered designs that are efficient, effective, and satisfying for the user.
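Remark: to make at least one of the listed models concrete, here is a minimal Python sketch of the Keystroke-Level Model (KLM), a simplified member of the GOMS family mentioned above. The operator times are the classic values from Card, Moran, and Newell (1983); the encoded task is a hypothetical example of my own, not part of the chat.

```python
# Minimal KLM sketch: estimate task completion time by summing
# standard operator times (Card, Moran & Newell, 1983).
KLM_OPERATORS = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with mouse to a target
    "B": 0.10,  # press or release mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(task: str) -> float:
    """Estimate task time (seconds) for a sequence of KLM operators."""
    return sum(KLM_OPERATORS[op] for op in task)

# Hypothetical task: mentally prepare, point to a field, click,
# home to keyboard, type five characters.
print(round(klm_estimate("MPBH" + "K" * 5), 2))  # -> 4.35
```

Such back-of-the-envelope estimates are exactly what task models like GOMS are used for in HCI: comparing alternative interaction sequences before any user test is run.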

User

Your answer is helpful. But these different contributions are usually not written within a coherent conceptual framework. Is it possible to extract those concepts which are common to these different contributions?

ChatGPT

Indeed, the diverse disciplines within cognitive science and HCI contribute a multitude of models and theories, and these contributions often lack a unifying framework. However, several core concepts are shared across these models and form the bedrock of cognitive science and HCI. Here are some of these shared concepts:

1. **Information Processing**: A key concept across many cognitive models is that humans process information in stages (e.g., encoding, storage, retrieval). This idea, originating from the computer metaphor of the mind, is a common thread in both psychological models and HCI designs.

2. **Attention**: The concept of attention, how it is directed, and how it is managed, is central to understanding human cognition and designing interfaces. Attentional models inform interface design by suggesting where to place important information and how to alert users to changes.

3. **Memory**: The functions of working memory and long-term memory and their limitations are critical concepts. They influence how much information should be presented at once and how interfaces can enhance recall through cues and design.

4. **Mental Models**: Across disciplines, there is recognition that users create mental models of how systems work, and the closeness of these models to the actual workings of the system can affect usability.

5. **Cognitive Load**: Cognitive load refers to the amount of mental effort being used in the working memory. Across HCI and cognitive science, there’s an emphasis on designing systems that manage and minimize cognitive load to prevent errors and improve efficiency.

6. **Problem Solving and Decision Making**: Understanding the cognitive processes behind problem solving and decision making is essential for creating interfaces that support these activities.

7. **Metacognition**: Awareness and understanding of one’s own thought processes are recognized as crucial for learning and interaction with systems. This is reflected in designs that support user control and flexibility.

8. **Perception**: Theories of visual, auditory, and haptic perception are fundamental in designing interfaces that are perceivable and that map onto the human sensory capabilities.

9. **User Experience**: The concept of creating positive interactions encompasses not just the usability but also the affective responses, such as satisfaction and enjoyment.

10. **Error Recognition and Recovery**: Understanding how and why errors occur in human-machine interaction guides the design of systems that help users avoid mistakes or recover from them easily.

11. **Adaptability and Learning**: The concept that users learn and adapt to systems, and that systems can adapt to users, is common across cognitive science and HCI. This is key in designing interfaces that are capable of accommodating users of varying skill levels and learning styles.

12. **Multimodality**: Recognizing that human interaction with the environment is multimodal has led to designs that utilize multiple modes of input and output (e.g., touch, voice, gesture).

These shared concepts help to form a bridge across the various contributions to cognitive science and HCI, providing a more cohesive understanding of how to design technology that works in harmony with human cognitive processes. Design principles and guidelines in HCI are often derived from these fundamental cognitive concepts, ensuring that despite the lack of a single, coherent framework, the discipline remains grounded in an understanding of human cognition.

User

I agree with you. Nevertheless, this shows that many important questions are still open. Last question: How would you locate the book ‘The Rationality Quotient’ by Stanovich and others in this framework of HCI?

ChatGPT

“The Rationality Quotient” by Keith E. Stanovich and colleagues is actually more directly related to cognitive psychology and the study of rational thought rather than Human-Computer Interaction (HCI) per se. However, its insights can certainly be located within the broader context of HCI and cognitive science.

Stanovich’s work focuses on the nature of rationality, distinguishing between different types of cognitive processes: those that are algorithmic (akin to computational processes in the brain), those that relate to reflective thinking and decision-making, and those that pertain to the knowledge base that individuals have at their disposal. Here’s how this relates to HCI:

1. **User Decision Making**: HCI design often considers how users make decisions while interacting with interfaces. Stanovich’s discussion of rational thinking can inform the design of systems that support better decision-making processes.

2. **Error Mitigation**: Understanding why people make irrational decisions or errors can help HCI professionals design systems that reduce the likelihood of user errors.

3. **Cognitive Biases**: The work on rationality includes the study of cognitive biases, which has implications for HCI in terms of designing interfaces that help mitigate these biases or use them to guide users towards more productive behaviors.

4. **Personalization and Adaptability**: Rationality research may influence the development of adaptive systems that personalize the user experience based on an individual’s demonstrated rationality levels or decision-making styles.

5. **Instructional Design**: Insights from studies of rationality can impact how educational technologies are designed to enhance learning and critical thinking skills.

6. **Reflective Interfaces**: The concept of reflective thinking can lead to the creation of interfaces that encourage users to think more deeply about their choices, such as in systems that prompt users to confirm or reconsider actions.

While “The Rationality Quotient” doesn’t provide a direct commentary on HCI, the underlying cognitive principles it explores are highly relevant for designing systems that align with human cognitive strengths and compensate for weaknesses. Thus, within the framework of HCI, Stanovich’s work can be seen as contributing to the foundational cognitive understanding necessary to create user-centered designs that support effective, rational, and satisfying interactions with technology.

User

This sounds positive for the book. But I wonder in what sense the concept of ‘rationality’ can be related to ‘cognitive processes’?

ChatGPT

The concept of “rationality” is deeply intertwined with cognitive processes. Rationality, in the context of cognitive science and psychology, is generally understood as the quality of being reasonable, based on facts or reason. It refers to the way in which individuals think, make decisions, and act based on certain principles of logic and consistency. Here’s how rationality relates to cognitive processes:

1. **Dual-Process Theories**: Theories like those proposed by Daniel Kahneman, which distinguish between two types of thinking processes—fast, automatic, intuitive thought (System 1) and slower, more deliberate, analytical thought (System 2)—are directly related to the study of rationality. Rationality is often associated with the reflective, analytical processes of System 2.

2. **Decision-Making**: Rationality is central to decision-making, which is a complex cognitive process involving the evaluation of options, the assessment of probabilities and outcomes, and the selection of a course of action. Cognitive science studies the heuristics and biases that affect this process.

3. **Logic and Reasoning**: Cognitive processes related to understanding logical structures and applying rules of inference fall under the umbrella of rationality. This includes deductive reasoning, inductive reasoning, and abductive reasoning.

4. **Problem Solving**: Rational approaches to problem solving involve systematic analysis, the application of knowledge, and consideration of potential solutions. This relates to cognitive processes involving working memory, planning, and executive function.

5. **Belief Formation and Revision**: Rationality also pertains to how individuals form, maintain, and revise their beliefs in light of new evidence. This involves cognitive processes related to learning, memory, and critical thinking.

6. **Metacognition**: Rational thinking involves metacognition, which is the awareness and control of one’s own cognitive processes. Metacognitive skills enable individuals to recognize when a problem is beyond their current understanding and to adjust their approach accordingly.

7. **Epistemic Rationality**: This aspect of rationality is concerned with how beliefs are justified and how knowledge is constructed, which is central to cognitive processes involving perception, interpretation, and the assessment of truth claims.

Stanovich and others in the field of cognitive science study these aspects of cognition to understand how and why people think and act the way they do, especially when their actions deviate from what might be considered objectively rational. Their work seeks to explain cognitive biases, irrational preferences, and other anomalies in human thought and behavior.

In “The Rationality Quotient,” Stanovich et al. propose a framework for understanding and measuring rationality that goes beyond the traditional IQ tests, which typically focus on problem-solving abilities and cognitive capacity. They emphasize the importance of rational thought and behavior in everyday decision-making and the practical outcomes of such thought processes. The concept of rationality thus serves as a lens through which cognitive processes are evaluated in terms of their effectiveness and alignment with goals and reality.

User

You are listing important contributions to the question. You mention the Dual-Process Theory by Kahneman. Why do you call this a ‘theory’?

ChatGPT

The term “Dual-Process Theory” is often used to describe a range of theoretical frameworks in psychology that posit two distinct modes or systems of thinking that operate when we process information. Daniel Kahneman popularized one of the most well-known dual-process models with his characterization of System 1 and System 2 in his book “Thinking, Fast and Slow.”

The reason it’s called a “theory” is because it offers a systematic explanation for observations and findings from numerous psychological studies. A theory in science is a well-substantiated explanation acquired through the scientific method and repeatedly tested and confirmed through observation and experimentation. Here’s why the Dual-Process Theory fits this definition:

1. **Empirical Support**: The Dual-Process Theory is supported by a wide range of experiments and observations in cognitive psychology, neuropsychology, and related fields.

2. **Predictive Power**: It provides predictions about human behavior that can be tested in experimental settings. For instance, it predicts that under certain conditions, people will rely more on fast, intuitive thinking (System 1) than on slow, deliberative thinking (System 2).

3. **Explanatory Scope**: The theory explains a variety of phenomena across different contexts, such as why people struggle with statistical reasoning or are susceptible to cognitive biases.

4. **Integrative Framework**: It integrates findings from various areas of cognitive science into a coherent framework, explaining how different types of thinking contribute to behavior and judgment.

5. **Heuristic Value**: The theory has heuristic value in that it generates new hypotheses and lines of research, driving the field forward.

In scientific parlance, a theory is not a mere guess or a tentative hypothesis; it is a comprehensive explanation for a range of phenomena. However, like all scientific theories, Dual-Process Theory is not beyond criticism or revision. As new data emerge, theories are often refined or challenged. Some researchers question the dichotomy proposed by Dual-Process Theories or argue for more nuanced or integrated models of cognition. Despite this, the Dual-Process Theory as presented by Kahneman has been immensely influential in our understanding of human thought and has shaped research in cognitive psychology, economics, and other fields.

User

OK, I agree.

ChatGPT

Great, I’m glad to hear that the information resonated with you! If you have any more questions or if there’s another topic you’re curious about, feel free to ask.

REVIEW: Keith E. Stanovich, Richard F. West, Maggie E. Toplak, “The Rationality Quotient. Toward a Test of Rational Thinking”, MIT Press, 2016

(Last change: Nov 1, 2023)

CONTEXT

This text belongs to the overall theme REVIEWS.

In recent months I have been engaged with the topic of text-generating algorithms and their possible impact on scientific discourse (some first notes on this discussion can be found here (https://www.uffmm.org/2023/08/24/homo-sapiens-empirical-and-sustained-empirical-theories-emotions-and-machines-a-sketch/)). In this context it is important to clarify the role and structure of human actors as well as the concept of intelligence. I have meanwhile abandoned the word intelligence completely, because its inflationary use in today's mainstream pulverizes any meaning. Even within a single discipline (like psychology) you can find many different concepts. Against this background I have read the book by Stanovich et al. as a prominent example of the use of the concept of intelligence, there combined with the concept of rationality, which is no less vague.

Introduction

The book “The Rationality Quotient” from 2016 does not represent the beginning of a discourse but is a kind of summary of a long-lasting discourse with many prior publications. This makes the book interesting, but also difficult to read at first, because on nearly every page it uses theoretical terms that are assumed to be known to the reader, and it cites other publications without sufficiently explaining why exactly these cited publications are important. This is not an argument against the book, but it places a burden on the reader, who has to learn a lot to understand the text.

A text that sums up its subject is valuable, because it reflects a consolidated view of the subject and thereby enables a kind of clarity that is typical of such an elaborated point of view.

The goal of the following review is not to give a complete account of every detail of the book, but only to present its main thesis and then to analyze the methods used and the epistemological framework applied.

Main Thesis of the Book

The review starts with the basic assumptions and the main thesis.

FIGURE 1 : The beginning. Note: the number ‘2015’ has to be corrected to ‘2016’.

FIGURE 2 : First outline of cognition. Note: the number ‘2015’ has to be corrected to ‘2016’.

As mentioned in the introduction, you will not find in the book a real overview of the history of psychological research on the concept of intelligence, nor an overview of the historical discourse on the concept of rationality, although the latter concept also has a rich tradition in philosophy. Thus you somehow have to know this already.

There are some clear warnings regarding the fuzziness of the concept of rationality (p.3) as well as of the concept of intelligence (p.15). From the point of view of philosophy of science it would be interesting to know which circumstances cause such fuzziness, but this is not a topic of the book. The book speaks within its own selected conceptual paradigm. Faced with the dilemma of which intelligence paradigm to use, the book has decided to work with the Cattell-Horn-Carroll (CHC) paradigm, which some call a theory. [1]

Right from the beginning it is explained that the discussion of intelligence lacks a clear account of a full model of human cognition (p.15) and that intelligence tests therefore mostly measure only parts of human cognitive functions (p.21).

Thus let us have a more detailed look at the scenario.

[1] For a first overview of the Cattell–Horn–Carroll theory see: https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory

Which Point of View?

The book starts with a first characterization of the concept of rationality from a point of view which is not really clear. From various remarks one gets some hints at modern cognitive science (4,6), decision theory (4), and probability calculus (9), but a clear description is missing.

And it is declared right from the beginning that the main aim of the book is the construction of a rational thinking test (4), because for the authors the existing intelligence tests (later narrowed to the Cattell-Horn-Carroll (CHC) type of intelligence test (16)) are too narrow in what they measure (15, 16, 21).

Related to the term rationality, the book states some requirements which the term should fulfill (e.g. ‘rationality as a continuum’ (4), ’empirically based’ (4), ‘operationally grounded’ (4), a ‘strong definition’ (5), a ‘normative one’ (5), a ‘normative model of optimum judgment’ (5)), but it remains more or less open what these requirements imply and which tacit assumptions have to be fulfilled for this to work.

The two requirements ’empirically based’ and ‘operationally grounded’ point in the direction of a tacitly assumed concept of an empirical theory, but exactly this concept (especially in association with the term cognitive science) is not really clear today.

Because the authors make many statements on the following pages which claim to be serious, it seems important for the discussion in this review to clarify the conditions under which language expressions have ‘meaning’ and can be classified as ‘true’.

If we assume (tentatively) that the authors take a scientific theory to be primarily a text whose expressions have a meaning that can transparently be associated with empirical facts, such that an expression is understood as grounded and classified as true whenever this is the case, then we have characterized a normal text which can be used in everyday life for the communication of meanings that can be demonstrated to be true.

Is there a difference between such a ‘normal text’ and a ‘scientific theory’? And, especially here, where the context should be a scientific theory within the discipline of cognitive science: what distinguishes a normal text from a ‘scientific theory within cognitive science’?

Because the authors do not explain their conceptual framework called cognitive science, we resort here to a most general characterization [2,3], which tells us that cognitive science is not a single discipline but an interdisciplinary study drawing on many different disciplines. It has not yet reached a state where all the methods and terms used are embedded in one general coherent framework. Thus the relationship between the conceptual frameworks used is mostly fuzzy and unclear. From this it follows directly that the relationship of the different terms to each other (e.g. ‘underlying preferences’ and ‘well ordered’) is, within such a blurred context, rather unclear.

Even the simple characterization of an expression as ‘having an empirical meaning’ is unclear: what are the kinds of empirical subjects and the terms used? In the list of involved disciplines, linguistics [4], psychology [5], and neuroscience [6] are mentioned, among others. But each of these disciplines is itself today a broad field of methods, not integrated, dealing with a multifaceted subject.

Using an Auxiliary Construction as a Minimal Point of Reference

Instead of becoming paralyzed by these catch-all characterizations of the individual disciplines, one can try to step back and take a look at basic assumptions about empirical perspectives.

If we take a group of human observers who are to investigate these subjects, we could make the following assumptions:

  1. Empirical Linguistics deals with languages as spoken and written by human persons within certain environments; these can be observed as empirical entities.
  2. Empirical Psychology deals with the behavior of human persons (a kind of biological system) within certain environments; this behavior can be observed.
  3. Empirical Neuroscience deals with the brain as part of a body which is located in some environment; all of this can be observed.

Empirical observations of certain kinds of phenomena can be used to define more abstract concepts, relations, and processes. These more abstract concepts, relations, and processes have as such no empirical meaning! They constitute a formal framework which has to be correlated with empirical facts to acquire empirical meaning. As is known from philosophy of science [7], the combination of empirical concepts within a formal framework of abstract terms can enable ‘abstract meanings’ which, by logical inference, can produce statements that are, at the moment of stating them, not empirically true, because the ‘real future’ has not yet happened. And on account of the generality of abstract terms, compared to the finiteness and concreteness of empirical facts, it can happen that the inferred statements will never become true. Therefore the mere usage of abstract terms within a text called a scientific theory does not guarantee valid empirical statements.

And in general one has to state that a coherent scientific theory including, e.g., linguistics, psychology, and neuroscience does not yet exist.

To speak of cognitive science as if it represented a clearly defined, coherent discipline therefore seems misleading.

This raises questions about the project of constructing a coherent rational thinking test (CART).

[2] See ‘cognitive science’ in wikipedia: https://en.wikipedia.org/wiki/Cognitive_science

[3] See too ‘cognitive science’ in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/cognitive-science/

[4] See ‘linguistics’ in wikipedia: https://en.wikipedia.org/wiki/Linguistics

[5] See ‘psychology’ in wikipedia: https://en.wikipedia.org/wiki/Psychology

[6] See ‘neuroscience’ in wikipedia: https://en.wikipedia.org/wiki/Neuroscience

[7] See ‘philosophy of science’ in wikipedia: https://en.wikipedia.org/wiki/Philosophy_of_science

‘CART’ TEST FRAMEWORK – A Reconstruction from the Point of View of Philosophy of Science

Before digging deeper into the theory, I try to understand its intended outcome as a point of reference. The following figure 3 gives some hints.

FIGURE 3 : Outline of the Test Framework based on the Appendix in Stanovich et al. 2016. This outline is a reconstruction by the author of this review.

It seems to be important to distinguish at least three main parts of the whole scientific endeavor:

  1. The group of scientists which has decided to work on a certain problem.
  2. The generated scientific theory as a text.
  3. The description of a CART Test, which specifies a procedure for how the abstract terms of the theory can be associated with real facts.

From the group of scientists (Stanovich et al.) we know that they understand themselves as cognitive scientists (without a clear characterization of what this means concretely).

The intended scientific theory as a text is assumed here to be realized in the book under review.

The description of a CART Test is here taken from the appendix of the book.

To understand the theory it is interesting to see that in the real test the test system (assumed here to be a human person) has to read (or hear?) an instruction on how to proceed with a task form, and then the test system has to process the test form according to its understanding of the instructions and of the form as given.

The result is a completed test form.

And it is then this completed test form which will be rated according to the assumed CART theory.

This complete paradigm raises a whole bunch of questions, which cannot all be answered here.

Mix-Up of Abstract Terms

Because the test scenario presupposes a CART theory, and within this theory some kind of model of the intended test users, it can be helpful to have a closer look at this assumed CART model, which is located in a person.

FIGURE 4 : General outline of the logic behind CART according to Stanovich et al. (2016).

The presented cognitive architecture is meant to provide a framework for the CART (Comprehensive Assessment of Rational Thinking), and this framework includes a model. The model is not only assumed to contextualize and classify heuristics and tasks, but it also presents rationality in such a way that one can deduce the mental characteristics included in rationality (cf. 37).

Because the term Rationality is not an individual empirical fact but an abstract term of a conceptual framework, this term has as such no meaning. The meaning of this abstract term has to be established by relations to other abstract terms which are themselves sufficiently related to concrete empirical statements. And these relations between abstract terms and empirical facts (represented as language expressions) have to be represented in a manner that makes it transparent how the measured facts are related to the abstract terms.

Here Stanovich et al. use another abstract term, Mind, which is associated with characteristics called mental characteristics: the Reflective Mind, the Algorithmic Level, and Mindware.

And then the text tells us that Rationality presents mental characteristics. What does this mean? Is rationality different from the mind, which has certain characteristics that rationality somehow presents through the mind, or is rationality rather part of the mind, manifesting itself in these mental characteristics? But what kind of meaning could it have for an abstract term like rationality to be part of the mind? Without an explicit model associated with the term Mind which locates the other abstract term Rationality within this model, there is no usable meaning here.

These considerations result from a text which uses different abstract terms in a rather unclear way. In a scientific theory this should not be the case.

Measuring Degrees of Rationality

At the beginning of chapter 4, Stanovich et al. look back to chapter 1. There they build up a chain of arguments which illustrates their general perspective (cf. 63):

  1. Rationality has degrees.
  2. These degrees of rationality can be measured.
  3. Measurement is realized by experimental methods of cognitive science.
  4. The measuring is based on the observable behavior of people.
  5. The observable behavior can manifest whether the individual actor (a human person) follows assumed preferences related to an assumed axiom of choice.
  6. Observable behavior which is classified as manifesting assumed internal preferences according to an assumed internal axiom of choice can show descriptive and procedural invariance.
  7. Based on this deduced descriptive and procedural invariance, it can further be inferred that these actors behave as if they are maximizing utility.
  8. It is difficult to assess utility maximization directly.
  9. It is much easier to assess whether one of the axioms of rational choice is being violated.

These statements characterize the Logic of the CART according to Stanovich et al. (cf. 64).

A major point in this argumentation is the assumption that observable behavior is such that one can deduce from its properties those attributes which point (i) to an internal model of an axiom of choice, (ii) to internal processes which manifest the effects of this internal model, and (iii) to certain characteristics of these internal processes which allow one to deduce whether utility is being maximized or not.

These are very strong assumptions.

If one further takes into account the explanations on pages 7f about the required properties of the abstract term axiom of choice (cf. figure 1), then these assumptions appear very demanding.

Is it possible to extract the necessary meaning out of observable behavior in a way which is clear enough by empirical standards to establish that this behavior shows property A and not property B?

As we know from the description of the CART in the appendix of the book (cf. figure 3), the real behavior assumed for a CART consists of (i) reading (or hearing?) an instruction communicated in ordinary English, then (ii) behavior derived from the understanding of the instruction, which (iii) manifests itself in reading a form with a text and filling out this form at predefined positions in a required language.

The described procedure is quite common throughout psychology and similar disciplines. But it is well known that the understanding of language instructions is very error-prone. Furthermore, the presentation of a task as a text is inevitably highly biased and likewise very error-prone with regard to understanding (this is one reason why purely text-based tests are rather useless in usability testing).

The point is that the empirical basis is not given as a protocol of observations of language-free behavior, but of behavior which is almost completely embedded in the understanding and handling of texts. This points to the underlying processes of text understanding, which are completely internal to the actor. There is no prewired connection between the observable strings of signs constituting a text and the possible meaning which is organized by the individual processes of text understanding.

Stopping Here

Having reached this point of reading and trying to understand, I decided to stop here: too many questions on all levels of scientific discourse, and the relationships between the main concepts and terms in the book of Stanovich et al., appear not clear enough. I therefore feel confirmed in my initial working hypothesis that the concept of intelligence today is far too vague, too ambiguous, to contain any useful kernel of meaning any more. And concepts like rationality and mind (and many others) seem to fare no better.

Chatting with chatGPT4

Since April 2023 I have been checking the ability of chatGPT4 to contribute to philosophical and scientific discourse. The working hypothesis is that chatGPT4 is good at summarizing the common concepts used in public texts, but that it is not capable of critical evaluations, of really new creative ideas, and certainly not of systematic analysis of the methods and frameworks used, their interrelations, their truth conditions, and much more. Nevertheless, it is a good ‘common sense check’. Until now I could not learn anything new from these chats.

If you have read this review with all its details and open questions, you will perhaps be a little disappointed by the answers from chatGPT4. But keep calm: it is somewhat helpful.

Protocol with chatGPT4

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for nearly 97–99% of the text! The diagrams of the German version have been left out.

CONTEXT

This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter for the end of 2023/beginning of 2024. It will then be announced here.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference, many different possible views on this topic are presented.

TRANSDISCIPLINARY

My contribution to the topic tries to define the role of so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then further result in better ‘criteria for an extended assessment’ of AI text generators in their role in scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.

HUMAN TEXT GENERATION

The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.

TEXT CAPABLE MACHINES

With the distinction in two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans, it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators are again ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.

BIOLOGICAL — NON-BIOLOGICAL

On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.

BLANK INTELLIGENCE TERM

The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, there exists today no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.

PREREQUISITES FOR TEXT GENERATION

If the homo-sapiens population is now identified as the original actor of ‘text generation’ and ‘text comprehension’, it shall first be examined which ‘special characteristics’ enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.

VALIDITY

A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, with being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’ which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.

ASYMMETRY: APPLICABLE – NOT APPLICABLE

One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.

MEANING

This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.

FUTURE AND EMOTIONS

In addition to the phenomenon of meaning, the phenomenon of applicability made apparent that the decision about applicability also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]

SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS

In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all.

For this purpose it can help to be aware of the fact that life on this planet Earth takes place at every moment in an inconceivably large number of ‘everyday situations’, which all occur simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on a next higher meta-level to the two texts A and V and can cause the application text to change the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, the generation of a new subsequent text A* is repeated.
  12. This 'repeatability' of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a 'simulation' (see the sketch after this list).
  14. Depending on the nature of the source text A and the nature of the change rules in V, possible simulations 'can go quite differently'. The set of possible scientific simulations thus represents 'future' not as a single, definite course, but as an 'arbitrarily large set of possible courses'.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, after all, with his corporeality completely himself part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally, in the next moment, do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
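
To make the interplay of the texts A, V and F more tangible, here is a minimal sketch in Python. It is only an illustration under strong assumptions: a 'text' is reduced to a set of statements taken to be true, a change rule to a condition/additions/removals triple. All names are hypothetical and none of this is part of the minimal theory concept itself.

```python
# Minimal sketch of the A/V/F machinery described in the list above
# (illustrative assumptions only): a 'source text' A is modeled as a set
# of statements assumed true, a change rule in V as a triple, and the
# application text F as the loop producing subsequent texts A*.

from typing import FrozenSet, List, NamedTuple

State = FrozenSet[str]  # a source text A: statements assumed true


class ChangeRule(NamedTuple):   # one entry in the rules text V
    condition: FrozenSet[str]   # must hold in A for the rule to apply
    additions: FrozenSet[str]   # statements that become true
    removals: FrozenSet[str]    # statements that cease to hold


def apply_rule(a: State, rule: ChangeRule) -> State:
    """One step of F: turn a source text A into a subsequent text A*."""
    return (a - rule.removals) | rule.additions


def simulate(a: State, v: List[ChangeRule], max_steps: int = 10) -> List[State]:
    """Repeated application of F: yields a series <A*1, ..., A*n>."""
    trace = [a]
    for _ in range(max_steps):
        applicable = [r for r in v if r.condition <= a]
        if not applicable:
            break                         # no rule applies: simulation ends
        a = apply_rule(a, applicable[0])  # here only one course is followed
        trace.append(a)
    return trace
```

Because several rules may be applicable to the same text A, a full treatment would branch over all of them, which yields exactly the 'arbitrarily large set of possible courses' mentioned in item 14.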

Even this simple example shows that the emotionality of 'finding good, wanting, and deciding' precedes the rationality of scientific theories. This continues in the so-called 'sustainability discussion'.

SUSTAINABLE EMPIRICAL THEORY

With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make the space of many possible futures visible, everyday actors are left with the question of which of all this they want as 'their future'. In the present we experience a situation in which mankind gives the impression of having agreed to destroy, ever more lastingly, the life beyond the human population, with the expected effect of 'self-destruction'.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To single out this variant among others, to accept it as 'good', to 'want' it, to 'decide' for it, lies in that so far hardly explored area of emotionality as the root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable 'evaluation procedure (EVAL)', what percentage of the properties of the target state Z has been achieved so far, provided that the favored target state has been transformed into a suitable text Z.
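
As a hedged illustration of such an evaluation procedure EVAL: if the favored target state Z is itself represented as a set of true statements (the same toy representation as in the sketch above), the 'percentage of the properties of Z achieved' could be computed as follows. The function name and representation are assumptions for illustration, not a fixed definition.

```python
def eval_progress(current: frozenset, target_z: frozenset) -> float:
    """Sketch of EVAL: percentage of the properties of the target
    state Z that already hold in the current situation."""
    if not target_z:
        return 100.0          # an empty target is trivially fulfilled
    achieved = len(current & target_z)
    return 100.0 * achieved / len(target_z)

# Example: two of four target properties realized -> 50.0
print(eval_progress(frozenset({"p1", "p2"}),
                    frozenset({"p1", "p2", "p3", "p4"})))
```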

In other words: the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become, in a sense, simple. That we make such transformations at all, and which aspects of a real or possible state we then focus on, is however antecedent to text-based rationality, as an emotional dimension.[2]

MAN-MACHINE

After these preliminary considerations, the final question is whether and how the main question of this conference, "How do AI text generators change scientific discourse?", can be answered at all.

My previous remarks have attempted to show what it means for humans to collectively generate texts which meet the criteria of scientific discourse and, moreover, the requirements of empirical or even sustainable empirical theories.

In doing so, it becomes apparent that both the generation of a collective scientific text and its application in everyday life involve a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor.

The aspect of 'validity' is part of a dynamic world reference whose assessment as 'true' is constantly in flux; while one actor may tend to say "Yes, that can be true", another actor may tend to just the opposite. While some may favor possible future option X, others may prefer future option Y. Where rational arguments run out, emotions speak. While one group has just decided to 'believe' in plan Z and to 'implement' it, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of future interpretation and future action has accompanied the Homo sapiens population from the very beginning. The not yet understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today's algorithms could also compute an evaluation of the relationship between a current situation A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory were formulated with its necessary texts, then a present-day algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
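
Combining the two sketches above, such an automatic computation could look as follows. The branching enumeration is an assumption about how 'all possible simulations' might be generated, not a description of an existing system.

```python
def all_simulations(a, v, depth=5):
    """Enumerate every possible course up to a given depth by branching
    over all applicable change rules (cf. the simulate() sketch above,
    which followed only one course)."""
    applicable = [r for r in v if r.condition <= a]
    if depth == 0 or not applicable:
        return [[a]]
    courses = []
    for r in applicable:
        for tail in all_simulations(apply_rule(a, r), v, depth - 1):
            courses.append([a] + tail)
    return courses

# Degree of target fulfillment for every possible course:
# scores = [eval_progress(c[-1], z) for c in all_simulations(a, v)]
```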

But what about (i) the elaboration of a theory, or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will very well need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]

COMMENTS

[1] https://zevedi.de/en/topics/ki-text-2/

[2] Talking about 'emotions' in the sense of 'factors in us' that move us to go from the state 'before the text' to the state 'written text' hints at very many aspects. In a small exploratory text "State Change from Non-Writing to Writing. Working with chatGPT4 in parallel" ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many 'individually subjective' aspects play a role here, which of course do not appear 'isolated' but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the 'objective context' that forms the core statement, but the 'individually subjective' component that appears in the process of 'putting into words'. This individually subjective component is tentatively used here as a criterion for 'authentic texts' in comparison to 'automated texts' like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an 'automated text' on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons it is clear that a text generated by chatGPT4 can never be an 'authentic text' in origin, unless it uses as a template an authentic text that it can modify. But then this would be a clear 'fake document'. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.

Talking with chatGPT. A Philosophical Encounter …


eJournal: uffmm.org
ISSN 2567-6458, January 14-15, 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a special thought experiment as part of the uffmm.org blog.

INTRODUCTION

Since its recent 'coming out' (November 2022), the chatbot chatGPT from openai.com has gained growing public attention: posts in blogs, journals, newspapers, TV … Even I noticed this new presence.

One way to understand it is to 'look behind the scenes', to look 'into the system', which is in principle possible. But just as a direct inspection of the human brain would tell you nearly nothing about its fantastic properties (even in the light of the interpretations of the brain sciences), and just as looking into the 'chips of computer hardware' would tell you nearly nothing about what a computer system is able to do, so it would in a first step be of no great help to fill many pages with the algorithms of chatGPT. An isolated algorithm doesn't tell too much, and a whole network of such algorithms doesn't either.

What perhaps can be interesting is a ‘comparison’ between our human behavior (and understanding) with the behavior of chatGPT and some ‘implicit meaning’ embedded in this behavior.

Thus, curious about what this chatGPT is, I logged into the system and started interacting with the chatGPT software in the offered 'playground'. The first two chats have been a little bit like 'swaying back and forth'; my next two chats have become quite interesting.

After this experience I decided to document these chats in this blog in a 1-to-1 fashion, thus enabling further reflections about them later. [3] Rather quickly one can learn that this adventure has many different dimensions, from 'simply being impressed' to 'deep philosophical thinking'.

OK, let’s start with doing it.

Chat No. 4, comprising about 8,000 characters, ends with the following paragraph:

Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?

Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.

For more comments see:

COMMENT

wkp := wikipedia

[1] wkp en: chatGPT, URL: https://en.wikipedia.org/wiki/ChatGPT, is a chatbot with the technology of a ‘Generative Pre-trained Transformer’. See more there.

[2] wkp en: chatbot, URL: https://en.wikipedia.org/wiki/Chatbot, is a software which is able to interact in the format of a dialogue (which tells nothing and everything …)

[3] During the first two chats I didn't save the dialogues. I started doing so with chat No. 3.

LOGIC. The Theory Of Inquiry (1938) by John Dewey – An oksimo Review – Part 3

eJournal: uffmm.org, ISSN 2567-6458, Aug 19-20, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

SCOPE

In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm. [2] Here the author reads the book “Logic. The Theory Of Inquiry” by John Dewey, 1938. [1]

Part I – Chapter I

THE PROBLEM OF LOGICAL SUBJECT-MATTER

In this chapter Dewey tries to characterize the subject-matter of logic. From the year 1938 one can look back on at least 2,500 years of thought dealing in one sense or another with what has been called 'logic'. His rough judgment is that the participants of the language game "proximate subject-matter of logic" seem to be widely in agreement about what it is, but in the case of the language game "ultimate subject-matter of logic" there seem to exist different or even conflicting opinions.(cf. p.8)

Logic as a philosophic theory

Dewey illustrates the variety of views about the ultimate subject-matter of logic by citing several different positions.(cf. p.10) Having done this, Dewey puts all these views together into a kind of 'meta-view', stating that logic "is a branch of philosophic theory and therefore can express different philosophies."(p.10) But exercising philosophy "itself must satisfy logical requirements."(p.10)

And in general he thinks that "any statement that logic is so-and-so, can … be offered only as a hypothesis and an indication of a position to be developed."(p.11)

Thus we see here that Dewey declares the ultimate logical subject-matter to be grounded in some philosophical perspective, which should be able "to order and account for what has been called the proximate subject-matter."(p.11) But the philosophical theory "must possess the property of verifiable existence in some domain, no matter how hypothetical it is in reference to the field in which it is proposed to apply it."(p.11) This is an interesting point because it raises the question in which sense a philosophical foundation of logic can offer verifiable existence.

Inquiry

Dewey gives some hint of a possible answer by stating "that all logical forms … arise within the operation of inquiry and are concerned with control of inquiry so that it may yield warranted assertions."(p.11) While the inquiry as a process is real, the emergence of logical forms has to be located in the different kinds of interactions between the researchers and some additional environment in the process. Here some verifiable reality should be involved, which is reflected in the accompanying language expressions used by the researchers for communication. This implies further that the language expressions used (which can even talk about other language expressions) are associated with propositions which can be shown to be valid.[4]

And, with some interesting similarity to the modern concept of 'diversity', he claims that in avoidance of any kind of dogmatism "any hypothesis, no matter how unfamiliar, should have a fair chance and be judged by its results."(p.12)

While Dewey is quite clear in using the concept of inquiry as a process leading to results which depend on the starting point and the realized processes, he additionally mentions concepts like 'methods', 'norms', 'instrumentalities', and 'procedures', but these concepts remain rather fuzzy.(cf. p.14f)

Warranted assertibility

Part of an inquiry are the individual actors, who have psychological states like 'doubt', 'belief', or 'understanding' (knowledge).(p.15) But from these concepts nothing follows about the needed logical forms or rules.(cf. p.16f) Instead Dewey repeats his requirement with the words: "In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry."(p.17) And therefore, instead of using fuzzy concepts like (subjective) 'doubt', 'belief', or 'knowledge', he prefers the concept of "warranted assertibility". This says not only that you can assert something, but that you can assert it with 'warranty', based on the known process which has led to this result.(cf. p.10)

Introducing rationality

At this point the story takes a first 'new turn', because Dewey now introduces a first characterization of the concept 'rationality' (which is for him synonymous with 'reasonableness'). While the basic terms of the descriptions in an inquiry process are at least partially descriptive (empirical) expressions, they are not completely "devoid of rational standing".(cf. p.17) Furthermore, the classification of final situations in an inquiry as 'results', which can be understood as 'confirmations' of initial assumptions, questions, or problems, is only given in relations talking about the whole process; thereby they talk about matters which are not rooted in limited descriptive facts only. Or, as Dewey states it, in "relations which exist between means (methods) employed and conclusions attained as their consequence."(p.17) Therefore the following practical principle is valid: "It is reasonable to search for and select the means that will, with the maximum probability, yield the consequences which are intended."(p.18) And: "Hence, … the descriptive statement of methods that achieve progressively stable beliefs, or warranted assertibility, is also a rational statement in case the relation between them as means and assertibility as consequence is ascertained."(p.18)

Suggested framework for ‘rationality’

Although Dewey does not exactly define the format of the relations between selected means and successful consequences, it seems 'intuitively' clear that the researchers have to have some 'idea' of such a relation, which then serves as a new 'ground for abstract meaning' in their 'thinking'. Within the oksimo paradigm [2] one could describe the problem at hand as follows:

  1. The researchers participating in an inquiry process have perceptions of the process.
  2. They have associated cognitive processing as well as language processing, where both are bi-directionally mapped into each other, but not 1-to-1.
  3. They can describe the individual properties, objects, actors, actions etc. which are part of the process in temporal order.
  4. With their cognitive processing they can build more abstract concepts based on these primary concepts.
  5. They can encode these more abstract cognitive structures and processes in propositions (and expressions) which correspond to these more abstract cognitive entities.
  6. They can construct rule-like cognitive structures (within the oksimo paradigm called 'change rules') with corresponding propositions (and expressions).
  7. They can evaluate whether those change rules describe 'successful' consequences.
  8. Change rules with successful consequences can become building blocks for those rules which can be used for inferences/deductions.

Thus one can look at the formal aspect of formal relations which can be generated by an inference mechanism, but such a formal inference need not necessarily yield results which are empirically sound. Whether this is the case is a job of its own, dealing with the encoded meaning of the inferred expressions and the outcome of the inquiry.(cf. p.19,21)
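
Whether an inferred or written-down rule is empirically sound can, in the spirit of items 6-8 of the list above, be checked against recorded episodes. The following Python fragment is only a sketch under toy assumptions (states as sets of statements, change rules as condition/additions/removals triples); all names are hypothetical.

```python
from typing import FrozenSet, List, NamedTuple, Optional, Tuple

State = FrozenSet[str]  # a described situation: statements taken as true


class ChangeRule(NamedTuple):
    condition: FrozenSet[str]   # must hold for the rule to be applicable
    additions: FrozenSet[str]   # statements predicted to become true
    removals: FrozenSet[str]    # statements predicted to cease to hold


def apply_rule(s: State, r: ChangeRule) -> State:
    """Predicted successor state if the rule is applied."""
    return (s - r.removals) | r.additions


def rule_success_rate(rule: ChangeRule,
                      episodes: List[Tuple[State, State]]) -> Optional[float]:
    """How often did the rule's predicted successor match what was
    actually observed (pairs of state and observed successor state)?"""
    relevant = [(s, s2) for (s, s2) in episodes if rule.condition <= s]
    if not relevant:
        return None             # the rule was never applicable so far
    hits = sum(1 for (s, s2) in relevant if apply_rule(s, rule) == s2)
    return hits / len(relevant)
```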

Limitations of formal logic

From this follows that the concrete logical operators, as part of the inference machinery, have to be qualified by their role within the more general relation between goals, means, and success. The standard operators of modern formal logic are only a few, and they are designed for a domain with a meaning space containing only two objects: 'being true' and 'being false'. In the real world of everyday experience we have a nearly infinite space of meanings. For describing this large everyday meaning space, the standard logic of today is too limited. Normal language teaches us how we can generate as many operators as we need just by using normal language. Inferring operators directly from normal language is not only more powerful but at the same time much, much easier to apply.[2]

Inquiry process – re-formulated

Let us fix a first hypothesis here. The ideas of Dewey can be re-framed with the following assumptions:

  1. By doing an inquiry process with some problem (question, …) at the start and proceeding with clearly defined actions, we can reach final states which are classified either as a positive answer (success) to the problem posed at the beginning or not.
  2. If there exists a repeatable inquiry process with positive answers, the whole process can be understood as a new 'recipe' (= complex operation, procedure, complex method, complex rule, law, …) for how to get positive answers for certain kinds of questions.
  3. If a recipe is available from preceding experiments, one can use this recipe to 'plan' a new process to reach a certain 'result' ('outcome', 'answer', …).
  4. The number of failures relative to the total number of trials in applying a recipe can be used to obtain a measure of the probability and quality of the recipe (a small sketch follows after this list).
  5. The description of a recipe needs a meta-level of 'looking at' the process. This meta-level description is made sound ('valid') by the interaction with reality, but as such it includes some abstraction, which enables a minimal rationality.
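
Item 4 of this list amounts to simple arithmetic; here is a minimal sketch (function name and interface assumed purely for illustration):

```python
def recipe_quality(successes: int, failures: int) -> float:
    """Item 4 above: the share of successful trials as a simple
    measure for the probability/quality of a recipe."""
    trials = successes + failures
    if trials == 0:
        raise ValueError("no trials recorded yet")
    return successes / trials

# e.g. recipe_quality(successes=17, failures=3) -> 0.85
```
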

Habit

At this point Dewey introduces another term, 'habit', which is not really clear and does not really explain more; but, for whatever reason, he introduces such a term.(cf. p.21f)

The intuition behind the term 'habit' is that, independent of the language dimension, there exists the real process, driven by real actors doing real actions. It is further (tacitly) assumed that these real actors have some 'internal processing' which is 'causing' the observable actions. If these observable actions can be understood/interpreted as an 'inquiry process' leading to some 'positive answers', then Dewey calls the underlying processes, all together, a 'habit': "Any habit is a way or manner of action, not a particular act or deed."(p.20) If one observes such a real process one can describe it with language expressions; then it gets the format of a 'rule', a 'principle', or a 'law'.(cf. p.20)

If one were to throw away the concept 'habit', nothing would be missing. Whatever internal processes are assumed, a description of them will be bound to their observability and will depend on some minimal language mechanisms. These must be explained. Everything beyond them is not necessary to explain rational behavior.[5]

At the end of chapter I Dewey points to some additional aspects in the context of logic. One aspect is the progressive character of logic as a discipline in the course of history.(cf. p.22)[6]

Operational

Another aspect is introduced by his statement "The subject-matter of logic is determined operationally."(p.22) And he characterizes the meaning of the term 'operational' as representing the "conditions by which subject-matter is (1) rendered fit to serve as means and (2) actually functions as such means in effecting the objective transformation which is the end of the inquiry."(p.22) Thus, again, the concept of inquiry is the general framework organizing means to reach a successful end. This inquiry has an empirical material (or 'existential') basis which can additionally be described symbolically. The material basis can be characterized by parts of it, called 'means', which are necessary to enable the objective transformations leading to the end of the inquiry.(cf. p.22f)

One has to consider at this point that the existential (empirical) basis of every inquiry process should not mislead one into the view that this can work without a symbolic dimension! Except for extremely simple processes, every process needs, for its coordination between different brains, a symbolic communication which has to use certain expressions of a language. Thus the cognitive concepts of the empirical means and the followed rules can only be 'fixed' and made 'clear' by the usage of accompanying symbolic expressions.

Postulational logic

Another aspect mentioned by Dewey is given by the statement: "Logical forms are postulational."(p.24) Embedded in the framework of an inquiry, Dewey identifies requirements (demands, postulates, …) at the beginning of the inquiry which have to be fulfilled through the inquiry process. And Dewey sees such requirements as part of the inquiry process itself.(cf. p.24f) If during such an inquiry process some kinds of logical postulates are used, they have no right of their own, independent of the real process! They can only be used as long as they are in agreement with the real process. With the words of Dewey: "A postulate is thus neither arbitrary nor externally a priori. It is not the former because it issues from the relation of means to the end to be reached. It is not the latter, because it is not imposed upon inquiry from without, but is an acknowledgement of that to which the undertaking of inquiry commits us."(p.26)

Logic naturalistic

Dewey comments further on the topic that "Logic is a naturalistic theory."(p.27) In some sense this is trivial, because humans are biological systems and therefore every process is a biological (natural) process, logical thinking included.

Logic is social

Dewey mentions further that "Logic is a social discipline."(p.27) This follows from the fact that "man is naturally a being that lives in association with others in communities possessing language, and therefore enjoying a transmitted culture. Inquiry is a mode of activity that is socially conditioned and that has cultural consequences."(p.27) And therefore: "Any theory of logic has to take some stand on the question whether symbols are ready-made clothing for meanings that subsist independently, or whether they are necessary conditions for the existence of meanings — in terms often used, whether language is the dress of 'thought' or is something without which 'thought' cannot be."(p.27f) This can also be put in the following general formula by Dewey: "…in every interaction that involves intelligent direction, the physical environment is part of a more inclusive social or cultural environment."(p.28) The central means of culture is language, which "is the medium in which culture exists and through which it is transmitted. Phenomena that are not recorded cannot be even discussed. Language is the record that perpetuates occurrences and renders them amenable to public consideration. On the other hand, ideas or meanings that exist only in symbols that are not communicable are fantastic beyond imagination".(p.28)

Autonomous logic

The final aspect about logic mentioned by Dewey concerns the position that "Logic is autonomous".(p.29) The position of the autonomy of logic has, in various varieties, been very common in history, but Dewey argues against it. The main point is, as already discussed before, that the open framework of an inquiry gives the main point of reference, and logic must fit this framework.[7]

SOME DISCUSSION

For a discussion of these ideas of Dewey see the next upcoming post.

COMMENTS

[1] John Dewey, Logic. The Theory Of Inquiry, New York, Henry Holt and Company, 1938 (see: https://archive.org/details/JohnDeweyLogicTheTheoryOfInquiry with several formats; I am using the kindle (= mobi) format: https://archive.org/download/JohnDeweyLogicTheTheoryOfInquiry/%5BJohn_Dewey%5D_Logic_-_The_Theory_of_Inquiry.mobi , which is very convenient for working directly with the text. Additionally I am using the free reader 'foliate' under ubuntu 20.04: https://github.com/johnfactotum/foliate/releases/). The page numbers in the text of the review, like (p.13), are the page numbers of the ebook as indicated in the ebook-reader foliate. (There exists no kindle version for linux, although amazon couldn't work without linux servers!)

[2] Gerd Doeben-Henisch, 2021, uffmm.org, THE OKSIMO PARADIGM
An Introduction (Version 2), https://www.uffmm.org/wp-content/uploads/2021/03/oksimo-v1-part1-v2.pdf

[3] The new oksimo paradigm does exactly this. See oksimo.org

[4] For the conceptual framework for the term ‘proposition’ see the preceding part 2, where the author describes the basic epistemological assumptions of the oksimo paradigm.

[5] Clearly it is possible and desirable to extend our knowledge about the internal processing of human persons. This is mainly the subject-matter of biology, brain research, and physiology. Other disciplines are close by, like psychology, ethology, linguistics, phonetics etc. The main problem with all these disciplines is that they are methodologically disconnected: a really integrated theory is not yet possible and does not exist. Examples of integration like neuro-psychology are far from what they should be.

[6] A very good overview of the development of logic can be found in the book The Development of Logic by William and Martha Kneale, first published 1962, with many successive corrected reprints by Clarendon Press, Oxford (and other cities).

[7] Today we have the general problem that the concept of formal logic has developed the concept of logical inference in so many divergent directions that it is not a simple problem to evaluate all these different ‘kinds of logic’.

MEDIA

This is another unplugged recording dealing with the main idea of Dewey in chapter I: what logic is and how logic relates to a scientific inquiry.

ENGINEERING AND SOCIETY: The Role of Preferences

eJournal: uffmm.org,
ISSN 2567-6458, May 4, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

FINAL HYPOTHESIS

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities, to train the next generation of humans in a more inspired and creative usage of algorithms in game-like learning processes.

CONTEXT

The overall context is given by the description of the Actor-Actor Interaction (AAI) paradigm as a whole. In this text the special relationship between engineering and the surrounding society is in focus. And within this very broad and rich relationship the main interest lies in the ethical dimension, here understood as those preferences of a society which are more supported than others. It is assumed that such preferences, manifesting themselves in real actions within a space of many other options, point to hidden values which guide the decisions of the members of a society. Thus values are hypothetical constructs based on observable actions within a cognitively assumed space of possible alternatives. These cognitively represented possibilities are usually only given in a mixture of explicitly stated symbolic statements and different unconscious factors which influence the decisions causing the observable actions.

These assumptions do not yet represent a common opinion and have not been condensed into a theoretical text. Nevertheless I am using them here because they help to shed some light on the rather complex process of finding a real solution to a stated problem which is rooted in the cognitive space of the participants of the engineering process. Working with these assumptions in concrete development processes can support a further clarification of all these concepts.

ENGINEERING AND SOCIETY

DUAL: REAL AND COGNITIVE

[Figure: The relationship between an engineering process and the preferences of a society]

As assumed in the AAI paradigm, the engineering process is that process which connects the event of stating a problem, combined with a first vision of a solution, with a final concrete working solution.

The main characteristic of such an engineering process is its dual character: a continuous interaction between the cognitive space of all participants of the process and real-world objects, actions, and processes. The real world as such is a loose collection of real things, to some extent connected by regularities inherent in natural things; but the visions of possible states, possible different connections, possible new processes are bound to the cognitive space of biological actors, especially to humans as exemplars of the Homo sapiens species.

Thus it is a major task of training, learning, and education in general to see how the real world can be mapped into cognitive structures, how cognitive structures can be transformed by cognitive operations into new structures, and how these new cognitive structures can be re-mapped into the real world of bodies.

Within the cognitive dimension there exist nearly infinite sets of possible alternatives, all indicating possible states of a world whose feasibility is more or less convincing. Limited by time and resources, it is usually not possible to explore for all these cognitively tapped spaces whether and how they work, what possible side effects they have, etc.

PREFERENCES

Partly by nature, partly by past experience, biological systems (like Homo sapiens) have developed cultural procedures that induce preferences for how one selects possible options: which one should be selected, under which circumstances, and with which further constraints. In some situations these preferences can be helpful; in others they can hide possibilities which afterwards are re-detected as being very valuable.

Thus every engineering process, which starts a transformation from some cognitively given point of view to a new cognitive point of view with a subsequent translation into some real thing, shares its cognitive space with the possible preferences of the cognitive space of the surrounding society.

It is an open question whether the engineers, as the experts, have an experimental, creative attitude to explore the possible cognitive spaces without dogmatic constraints in order to find new solutions which can improve life. If one assumes that there exist no absolute preferences, on account of the substantially limited knowledge of mankind at every point in time, and infers from this fact the necessity of extending the actual knowledge further to enable the mastering of an open, unknown future, then the engineers will try to explore seriously all possibilities, without constraints, to extend the power of engineering deeper into the heart of the known as well as the unknown universe.

EXPLORING COGNITIVE POSSIBILITIES

At the start one has only a rough description of the problem and a rough vision of a wanted solution, which gives some direction for the search for an optimal solution. This direction also represents a kind of preference: what is wanted as the outcome of the process.

On account of the inherent duality of human thinking and communication, which embraces the cognitive space as well as the realm of real things (both connected by complex mappings realized by the brain, which operates nearly completely unconsciously), a long process of concrete real and cognitive actions is necessary to materialize cognitive realities within a communication process. Main modes of materialization are the usage of symbolic languages, paintings (diagrams), physical models, algorithms for computation and simulations, and especially gaming (in several different modes).

As everybody knows, these communication processes are not simple and can be a source of confusion, and the coordination of different brains with different cognitive spaces as well as different layouts of unconscious factors is a difficult and very demanding endeavor.

The communication mode of gaming is of special interest here because it is one of the oldest and most natural modes of learning, yet in the official education processes in schools and universities (and in companies) it was until recently not part of the official curricula. But it is the only mode where one can exercise the dimension of preferences explicitly, in combination with an exploring process and, if one wants, with the explicit social dimension of having more than one brain involved.

In roughly the last 50-100 years the term 'project' has gained more and more acceptance, and indeed the organization of projects resembles a game; but a project is usually handled as a hierarchical, constraints-driven process where creativity and concurrent developing (= gaming) is not a main topic. Even if companies allow concurrent development teams, these teams are cognitively separated, and the implicit cognitive structures are black boxes which cannot be evaluated as such.

In the AAI paradigm presupposed here, the open creative space has a high priority, to increase the chance for innovation. Innovation is the most valuable property in the face of an unknown future!

While the open space for real creativity has to be exercised in all the mentioned modes of communication, the final gaming mode is of special importance. To enable a gaming process one has to define explicit win-lose states. This objectifies values/preferences that were hidden in the cognitive space before. Such an objectification makes things transparent, enables more rationality, and allows the explicit testing of these defined win-lose states as feasible or not. Only tested hypotheses represent tested empirical knowledge. And because in a gaming mode whole groups, or even all members of a social network, can participate in a learning process about the functioning and possible outcome of a presented solution, everybody can be included. This implies a common sharing of experience and knowledge, which simplifies the communication, and therefore the coordination of the different brains with their unconsciousness, a lot.
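
As a hedged illustration of what 'explicitly defined win-lose states' could look like: preferences are objectified as testable predicates over a (here: toy) game state. All names and thresholds below are invented purely for illustration.

```python
# Sketch: preferences made explicit as testable win-lose states.
from typing import Callable, Dict

GameState = Dict[str, float]
Predicate = Callable[[GameState], bool]

# Illustrative, assumed win/lose conditions (not from the AAI texts):
win_states: Dict[str, Predicate] = {
    "co2 target reached": lambda s: s.get("co2_reduction", 0.0) >= 0.4,
}
lose_states: Dict[str, Predicate] = {
    "budget exhausted": lambda s: s.get("budget", 0.0) < 0.0,
}


def judge(state: GameState) -> str:
    """Once win/lose states are explicit, they can be tested on any
    simulated or real state instead of staying hidden in the
    cognitive space of the participants."""
    if any(p(state) for p in lose_states.values()):
        return "lose"
    if all(p(state) for p in win_states.values()):
        return "win"
    return "open"
```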

TESTING AND EVALUATION

Testing a proposed solution is another expression for measuring the solution. Measuring is understood here as a basic comparison between the target to be measured (here the proposed solution) and a previously agreed norm which shall serve as the point of reference for the comparison.

But what can serve as such a previously agreed norm?

Some aspects can be mentioned here:

  1. First of all there is the proposed solution as such, which is here a proposal for a possible assistive actor in an assumed environment for some intended executive actors who have to fulfill some job (task).
  2. Part of this proposed solution are given constraints and non-functional requirements.
  3. Part of this proposed solution are some preferences as win-lose states which have to be reached.
  4. Another factor, difficult to define, are the executive actors if they are biological systems: biological systems have a basic built-in ability to act freely, they are learning systems, and this is associated with a large unconscious realm that cannot be fully defined.

Given the explicit preferences, constrained by many assumptions, one can only test whether the invited test persons, understood as possible instances of the intended executive actors, are able to fulfill the defined task(s) in some predefined amount of time, within an allowed threshold for making errors, with an expected percentage of solved sub-tasks, together with a sufficient subjective satisfaction with the whole process.
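
Such a test against the previously agreed norm can be written down directly. The following sketch only illustrates the structure; all threshold names and values are invented placeholders.

```python
from dataclasses import dataclass


@dataclass
class TestRun:
    duration_s: float        # time the test person needed for the task
    errors: int              # errors made during the run
    solved_subtasks: float   # fraction of sub-tasks solved (0..1)
    satisfaction: float      # subjective satisfaction rating (0..1)


def meets_norm(run: TestRun,
               max_duration_s: float = 300.0,   # assumed thresholds,
               max_errors: int = 3,             # agreed beforehand as
               min_solved: float = 0.8,         # the measurement norm
               min_satisfaction: float = 0.6) -> bool:
    """Compare one test run against the previously agreed norm."""
    return (run.duration_s <= max_duration_s
            and run.errors <= max_errors
            and run.solved_subtasks >= min_solved
            and run.satisfaction >= min_satisfaction)

# e.g. meets_norm(TestRun(240.0, 2, 0.9, 0.7)) -> True
```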

But because biological executive actors are learning systems, they will behave differently in repeated tests; they can furthermore change their motivations and their interests, they can change their emotional commitment, and because of their built-in basic freedom to act there can be no 100% probability that they will act at time t as they have acted all the time before.

Thus for all kinds of jobs where the process is more or less fixed and nothing new will happen, the participation of biological executive actors in such a process is questionable. It seems (as a hypothesis) that biological executive actors are better placed in jobs where there is some minimal rate of curiosity, innovation, and creativity combined with learning.

If this hypothesis is empirically sound (as it seems), then all jobs where human persons are involved should have more the character of games than of anything else.

It is an interesting side note that current research in robotics, under the label of developmental robotics, is struggling with the problem of how to make robots learn continuously by following interesting preferences. Given a preference, an algorithm can (under certain circumstances) often work better than a human person at finding an optimal solution; but lacking such a preference, the algorithm is lost. And actually there exists not the faintest idea how algorithms should acquire the kind of preferences which are interesting and important for an unknown future.

On the contrary, humans are known to be creative, innovative, to detect new preferences etc., but they have only limited capacities to explore these creative findings through to some telling endpoint.

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities, to train the next generation of humans in a more inspired and creative usage of algorithms in game-like learning processes.