Autor: Gerd Doeben-Henisch
Aug 30, 2023
This text belongs to a series of experiments with chatGPT4. While there already exist some experiments demonstrating the low quality of chatGPT4 in many qualitative tests, the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts on a higher level. I started this series with a first post here.
In the following series of dialogues with chatGPT4, the software stated at the outset that it cannot read a book like a human person. Asked about a particular book, “Das Experiment sind wir” by the author Christian Stöcker, the software answered that it doesn’t know the book. I then asked whether it can nevertheless ‘learn’ something based on the documents given to it up to the year 2021. The answer was quite revealing: “No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.” I didn’t give up trying to understand what the software is able to do, if not ‘reading a book’ or ‘learning’. I asked: “What can you extract from your text base related to the question what can happen in the head of people if they read a book?” I got a list of general properties summarized from different sources (without these sources being named). I continued by asking the software not about the learning inside itself but about what it knows about learning inside human persons reading a book: “How do you describe the process of learning in human persons while reading a book?” The software mentioned some aspects which have to be considered while a human person reads a book. But this gives only a ‘static view’ of the structures active in the process of reading. More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: what kinds of changes are typical for human persons while reading a book?
The software gives a detailed breakdown of the learning process in human persons while reading, but again these are statements without any ‘backing up’: no sources, no contexts. In the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts mere ‘fakes’, simply wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’? The author then asked the software: “Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?” The answer again was very simple: “I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.” Because the Wikipedia encyclopedia is known to always provide explicit sources for its texts, the author asked directly: Do you know the text base Wikipedia? Clearly, chatGPT4 knows Wikipedia; besides many positive remarks it answered: “The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.” This sounds quite good for Wikipedia. Thus it is interesting how chatGPT4 compares itself to Wikipedia. Question: “Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?” chatGPT4 doesn’t give a clear answer; it summarizes points for Wikipedia and for itself, but leaves the question open.
Thus the author continued by asking for a more ‘general criterion’, which should be valid for both Wikipedia and chatGPT4: “Which criteria do you know for a good scientific description?” As usual, chatGPT4 gives a long list of criteria. The author asks back: “Your overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From your preceding self-description it is not easy to see whether these criteria can also be applied to the output of chatGPT4.” The answer chatGPT4 gives renders it nearly unusable for any serious scientific work. Yes, you can use chatGPT4 for some first general information, but without knowing more yourself, you are really lost. It is surely far more fruitful to check Wikipedia directly and to use the sources cited there for further investigations.
Reading a book
I started spontaneously to talk with chatGPT4 about a book by Christian Stöcker with the title “Das Experiment sind wir”, published in 2020.
Here is the dialogue:
Can you read a book?
I don’t “read” books in the same way humans do. Instead, I was trained on vast amounts of text data, which includes information from books, articles, websites, and other written sources up to my last update in September 2021.
You can ask me about the content, summaries, themes, and other information from many books, especially those that are well-known or that were published before my last training cut-off. If the book was published after September 2021 or is particularly obscure, I might not have information about it.
If you have a particular book or topic in mind, feel free to ask, and I’ll do my best to provide relevant information!
What can you say about the book entitled “Das Experiment sind wir” by the author Christian Stöcker?
As of my last update in September 2021, “Das Experiment sind wir” by Christian Stöcker hasn’t been one of the primary sources I was trained on. However, Christian Stöcker is known as a journalist and professor in the field of digital communication.
If “Das Experiment sind wir” was published or gained significant attention after my last update, I may not have detailed information about its content, themes, or specific insights.
For a detailed overview or analysis of the book, I would recommend reading reviews, summaries, or author interviews, or accessing the book directly. If you have other questions or need information on a related topic, I’m here to help!
OK, chatGPT4 doesn’t know this book. That is acceptable; even a high-performance system cannot know everything.
But its admission that it cannot read a book as a human person would do is interesting. In this sense chatGPT4 is not a ‘typical reader’ but, as we indeed know, a piece of software, managed by human persons, who decide which kinds of texts chatGPT4 has to ‘store’ in its memory and which form the basis for its ‘algorithmic processing’.
But the question remains whether chatGPT4 can at least learn on its own, starting from the text base given to it.
Thus, if you have a text base as a point of reference, are you able to learn in some sense on your own?
No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.
Here’s an analogy: Imagine an extremely comprehensive book. This book can answer a vast array of questions based on its contents, but it can’t write new chapters or update its pages on its own. In a similar manner, I can generate answers based on the data I was trained on, but I can’t independently update that data or “learn” new things.
So, while I can generate responses based on a broad range of existing knowledge, I can’t assimilate or understand new information or events after my last training cut-off in September 2021.
As expected, chatGPT4 cannot learn as we humans can learn. Generating summaries can be of some help, but this cannot replace ‘true learning’.
But let us stay with the topic of ‘reading a book’:
What can you extract from your text base related to the question what can happen in the head of people if they read a book?
Reading a book can have various cognitive, emotional, and psychological effects on individuals. Here are some insights based on literature, cognitive science, and psychological studies up to my last training cut-off:
- Cognitive Stimulation: Engaging with the written word can stimulate the brain. Studies have suggested that mental stimulation can slow the progress (or possibly even prevent) Alzheimer’s and dementia, as keeping your brain active and engaged prevents it from losing power.
- Mental Escape: For many, reading a book can act as an escape from the real world. Immersing oneself in a different world, whether it’s a fantasy realm or an historical setting, provides a break from everyday stressors.
- Increased Knowledge: Naturally, books inform and educate. This builds a knowledge base that can be applied in real-life situations.
- Vocabulary Expansion: Reading regularly can expand one’s vocabulary more than direct teaching or verbal communications.
- Improved Focus and Concentration: Unlike skimming short web articles or watching quick videos, reading a book requires a longer attention span and can help improve one’s focus and concentration.
- Stress Reduction: A well-written novel or non-fiction book can transport the reader to other realms, distracting them and helping them to de-stress more effectively than other methods.
- Improved Analytical Skills: Engaging with complex narratives and plots can help readers improve their analytical skills as they ponder plot developments and character motivations.
- Empathy Development: Reading fiction, in particular, allows readers to live vicariously through characters, experiencing a vast range of emotions and situations. This can help in developing a deeper understanding and empathy for others.
- Mental Tranquility: Reading subjects like poetry or certain religious texts can provide tranquility, help lower blood pressure, and reduce stress.
- Memory Improvement: Reading involves remembering various characters, their backgrounds, histories, and nuances. This can create new brain pathways and strengthen existing ones.
- Entertainment: Of course, one of the primary reasons people read is for entertainment. A good story can be incredibly engaging, offering hours of enjoyment.
It’s worth noting that the impact of reading can vary based on the type of book (e.g., fiction vs. non-fiction), the style of writing, the personal experiences and preferences of the reader, and the context in which they’re reading.
chatGPT4 mentions some aspects which have to be considered while a human person reads a book. This gives a ‘static view’ of the structures active in the process of reading.
More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: what kinds of changes are typical for human persons while reading a book?
How do you describe the process of learning in human persons while reading a book?
Reading a book is a complex cognitive process that involves multiple levels of mental engagement and comprehension. When a person reads, several processes related to learning occur simultaneously. Here’s a detailed breakdown of the learning process while reading:
- Perceptual Processing: This is the basic recognition of words and letters. As your eyes scan a page, the visual cortex interprets the shapes of letters and groups of letters as words. This process becomes automatic with practice and training.
- Decoding: This involves understanding the phonetic value of words, which is especially crucial for unfamiliar terms or names. This skill is emphasized heavily in early reading education when children sound out words.
- Semantic Processing: Once the words are recognized, the brain works to understand their meaning. This involves linking the words to their associated concepts in the brain.
- Syntax and Grammar Processing: The brain understands the structure of sentences and how words relate to each other, determining the relationships among verbs, nouns, adjectives, etc.
- Working Memory: This is essential for maintaining an understanding of a sentence or paragraph. As you read, you keep the beginning of a sentence in mind as you progress to the end, allowing you to grasp the entire concept.
- Building Connections: Learning often involves connecting new information with prior knowledge. When reading, you constantly relate what you are consuming to what you already know, which can deepen comprehension.
- Critical Thinking: Beyond just understanding the content, readers often evaluate, analyze, and critique what they’re reading. They may ask themselves if they agree with the author, if the information is credible, or how it relates to other things they know.
- Visualization: Good readers often create mental images of what they are reading. This is especially true for narrative texts where imagining scenes, characters, or events can enhance comprehension.
- Emotional Engagement: The content in books can evoke emotions. Whether it’s a novel that tugs at the heartstrings or a non-fiction book that arouses anger or passion about a topic, emotional engagement can increase retention and internalization of content.
- Metacognition: This involves thinking about one’s own thinking. Effective readers often monitor their understanding as they read, recognizing when they’re confused and employing strategies to improve comprehension.
- Retention & Long-Term Memory: Once the book is done, the process of shifting information from working memory to long-term memory begins. Repetition, discussion, and application of knowledge can aid this transfer.
- Application: The ultimate goal of learning is often to apply knowledge. After reading, a person may take actions, make decisions, or change beliefs based on what they’ve learned.
It’s important to note that everyone’s reading process might be a bit different. Some people naturally visualize more, while others might focus on critical analysis. Additionally, background knowledge, context, and the reader’s purpose (e.g., reading for pleasure versus studying for an exam) can influence how these processes unfold.
chatGPT4 presents some facts in response to the question which are, as such, really interesting. But in the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts mere ‘fakes’, simply wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’?
Let us ask chatGPT4:
Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?
I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.
The steps and processes I outlined for learning while reading a book are based on widely accepted theories and research in the fields of cognitive psychology, education, and linguistics. If you’re interested in diving deeper into these topics, you might consider exploring:
- Cognitive psychology textbooks: These delve into the processes of perception, working memory, long-term memory, and more.
- Reading and literacy research: Scholars in this field have conducted in-depth studies on the processes of decoding, comprehension, and the strategies good readers use.
- Educational psychology: This field explores topics related to learning, including metacognition, critical thinking, and the application of knowledge.
If you’d like specific book or article recommendations, or more in-depth explanations on any of these topics, I can certainly provide them.
Do you know the text base Wikipedia?
Yes, I’m familiar with Wikipedia. Wikipedia is a free online encyclopedia that is collaboratively edited by volunteers from around the world. It’s one of the most visited websites globally and provides information on a vast array of topics.
Wikipedia is known for its “wiki” nature, which means that almost any article can be edited by any user with internet access. This collaborative approach has made Wikipedia a rich and ever-evolving source of information. However, because of this open-editing system, it’s always a good idea to verify critical information from Wikipedia with other reliable sources.
The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.
Would you like more information or have a specific question related to Wikipedia?
chatGPT4 knows Wikipedia and gives a quite fair description of it (probably of the English version). Of interest is how chatGPT4 describes the characteristic differences between itself and Wikipedia.
Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?
Wikipedia and I serve different purposes, and the trustworthiness of information from either source depends on the context and specific use-case.
Advantages of Wikipedia:
- Citations: Wikipedia articles often have extensive lists of references, which allow readers to trace back the origin of specific pieces of information. This provides a level of transparency.
- Collaboration: Wikipedia is collectively edited by a vast number of individuals, which can mean many eyes reviewing and refining content.
- Current Events: Wikipedia is often updated rapidly in the face of current events, so for very recent information, it can sometimes be more up-to-date than my last training data.
Advantages of ChatGPT (me):
- Broad Training Data: I’ve been trained on diverse datasets, including books, articles, and other content. This allows me to provide a wide range of information.
- Consistent and Neutral: While Wikipedia aims for a neutral point of view, individual articles might sometimes reflect biases of the contributors. My design aims to minimize bias and provide consistent responses.
- Conversational Interface: Users can have a back-and-forth dialogue with me, making it easier to ask follow-up questions or dive deeper into topics.
- Vandalism: Wikipedia can be edited by anyone, which sometimes leads to misinformation due to vandalism. However, popular pages often have vigilant editors and protective measures in place.
- Knowledge Cutoff: My last training data is from September 2021, so events or advancements after that date are not within my current knowledge.
Ultimately, neither Wikipedia nor I should be seen as infallible. Both can serve as starting points for research, but critical information should be cross-referenced with primary sources or authoritative references. It’s always a good practice to be discerning and critical when interpreting information from any source.
The characterization of Wikipedia is still quite fair. But in some sense chatGPT4 ‘downplays’ its lack of real sources, which are the basis of any serious argument.
Which criteria do you know for a good scientific description?
Scientific descriptions are meant to be precise, clear, and replicable. To ensure a high-quality scientific description, several criteria should be met:
- Accuracy: The description should correctly represent observations, data, or results without exaggeration or misrepresentation.
- Clarity: The language used should be clear and free from ambiguity. It should be easily understood by those with appropriate background knowledge in the field.
- Completeness: All relevant details should be included, especially those that would be necessary for replication or to understand the context of the study.
- Consistency: Descriptions should be consistent in terms of units of measurement, terminology, and methodology.
- Objectivity: The description should be free from personal bias. Instead of using subjective terms, rely on objective measurements and observations.
- Replicability: Enough detail should be provided so that another scientist with the same background knowledge and resources can replicate the study or experiment.
- Relevance: Only pertinent details should be included. While completeness is important, overloading a description with irrelevant information can detract from its usefulness.
- Use of Standardized Terminology: Using accepted and standardized terms in the field ensures that the description is understood universally among experts.
- Citing Sources: Any claim, idea, or method that isn’t original to the current study should be appropriately cited, giving credit to the original sources and allowing readers to delve deeper into the background if they wish.
- Avoiding Jargon (when possible): While specialized terms can be necessary, over-reliance on jargon can alienate those slightly outside the immediate field. When jargon is used, it should be defined or explained.
- Providing Context: A good scientific description often includes background information or a literature review to give context to the study’s significance and position within the broader field.
- Transparency: Limitations, potential sources of error, and uncertainties should be clearly stated. This honesty allows readers to gauge the reliability of the findings and understand any potential caveats.
A good scientific description is essentially a balance between precision and clarity. It should be detailed enough to be of value to experts in the field while still being accessible to those with a general background in the discipline.
The overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From chatGPT4’s preceding self-description it is not easy to see whether these criteria can also be applied to the output of chatGPT4.
Let us turn this into a question:
Your overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From your preceding self-description it is not easy to see whether these criteria can also be applied to the output of chatGPT4.
That’s an insightful observation. Let’s discuss the applicability of some of these criteria to ChatGPT (like me) and its outputs:
- Replicability: While the underlying research and development process of models like mine can be replicated in theory, the specific outputs for each query might vary slightly, even with similar inputs. Therefore, individual responses from me aren’t precisely replicable.
- Citing Sources: I don’t provide direct citations for specific statements in my responses. My training data comes from diverse sources, and I generate responses based on patterns in that data. Thus, I can’t directly attribute my statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description.”
- Transparency: OpenAI, the organization behind me, has released details about the architecture, training methodology, and intentions behind models like mine. However, the specifics of the training data or exact processes that lead to a particular response aren’t transparently available in each interaction.
In light of this:
- For scholarly or academic purposes, my responses should be considered as starting points or general overviews. Critical information should always be cross-referenced with authoritative and primary sources.
- For general inquiries or casual information, my design is to provide accurate and coherent answers based on the vast amount of data I’ve been trained on.
It’s crucial to recognize the intended use and context when considering the applicability of scientific criteria to any tool or source, including ChatGPT.
There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (translated: AI – Text and Validity. How do AI text generators change scientific discourses?), https://zevedi.de/en/topics/ki-text-2/ . One lecturer, Thomas Arnold, gave a talk entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.
Gerd Doeben-Henisch, STATE CHANGE FROM NON-WRITING TO WRITING. WORKING WITH CHATGPT4 IN PARALLEL. AN EXPLORATION, https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/