
EMPIRICALLY TRUE?

Author: Gerd Doeben-Henisch

Contact: info@uffmm.org

Start: May 30, 2024

Last change: May 31, 2024

CONTEXT

This text is part of the text ‘Rebooting humanity’

(The German version can be found HERE)

Empirically True?

Hypotheses 2 – 4 …


With Hypothesis 1, a further paradox arises: If the structure of our human body (including its brain) is designed such that there is no direct, permanent one-to-one mapping of the real physical world outside the brain into the internal states of the body (including the brain), how can humans then make and use ‘empirically true statements’ about something outside the body or outside the brain?

In everyday life, we can all have the following experiences:

When at least two people are involved and they have no special limitations, we can distinguish the following cases:

  1. There is an object with certain properties that the involved persons can perceive with their senses. Then one person A can say: ‘There is an object X with properties Y.’ And another person B can say: ‘Yes, I agree.’
  2. A certain object X with properties Y cannot be perceived with the senses by the involved persons. Then one person A can say: ‘The object X with properties Y is not there.’ And another person B can say: ‘Yes, I agree.’
  3. There is an object with certain properties that the involved persons can perceive with their senses but have never seen before. Then one person A can say: ‘There is an object with properties that I do not yet know. This is new to me.’ And another person B can then say: ‘Yes, I agree.’
  4. A certain object X with properties Y cannot currently be perceived with the senses by the involved persons, but it was there before. Then one person A can say: ‘The object X with properties Y is no longer there.’ And another person B can say: ‘Yes, I agree.’

Introduction of Hypothesis 2
Case 1 becomes understandable if we assume that the sensory stimuli from object X with properties Y lead to activations in the sense organs, generating a sensory perception that can persist for the duration of object X’s presence.

To identify and classify this temporary perception as an ‘object of type X with properties Y,’ the involved persons must have a ‘memory’ that holds an ‘abstract object of type X with properties Y’ ready.

The ‘realized agreement’ between the perception of object X and the memory of a corresponding abstract object X then allows for the decision that there is a current perception of the abstract object X, whose ‘perceived properties’ ‘sufficiently match’ the ‘abstract properties.’

Important: this agreement occurring in the brain between a perceived object and a remembered object X does not imply anything about the real concrete circumstances that led to the perception of the object.[1]

This situation describes what is meant by Hypothesis 2: Persons can recognize a perceived object as an object of type X with properties Y if, at the moment of the current perception, they have a memory of a corresponding abstract object of type X with properties Y available.

Important: This Hypothesis 2 refers so far to what happens with and within an individual person. Another person normally cannot know about these processes. Internal processes in persons are — so far — not perceivable by others.[2]

[1] Modern simulation techniques can be so ‘real’ for most people that they make it difficult, if at all possible, to discern the ‘difference’ from the real world based solely on sensory perception. This would be the case where a sensory perception and a remembered abstract object in the brain show a substantial agreement, although there is no ‘real’ empirical object triggering the perception. … The computer itself, which ‘simulates’ something in a manner that looks for an observer ‘like being real’ (or the technical interface through which the computer’s signal reaches human sensors), is nevertheless a ‘real machine’ addressing the human sense organs ‘from the outside’.

[2] Even if modern neuroscientific measuring techniques can make electrical and chemical properties and activities visible, it is — so far — never possible to directly infer the functionalities hidden therein from these activities. Analogously, if one measures the electrical activities of the chips in a computer (which is possible and is done), one can never infer the algorithms currently being executed, even if one knows these algorithms!
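How such a ‘sufficient match’ between a current perception and a remembered abstract object might work can be sketched in a few lines of Python. Everything here is an illustrative assumption: the object types, their properties, and the overlap threshold are invented for this sketch and do not appear in the text above.

```python
# Toy sketch of Hypothesis 2: a current perception is classified by
# comparing its perceived properties against remembered abstract
# object types ('memory'). The threshold models 'sufficiently match'.

MEMORY = {
    "cup":  {"graspable", "hollow", "has_handle"},   # invented examples
    "ball": {"graspable", "round", "rolls"},
}

def classify(perceived_properties, threshold=0.6):
    """Return the remembered object type whose abstract properties
    sufficiently match the perceived ones, or None if nothing matches."""
    best_type, best_score = None, 0.0
    for obj_type, abstract_props in MEMORY.items():
        overlap = len(perceived_properties & abstract_props)
        score = overlap / len(abstract_props)
        if score >= threshold and score > best_score:
            best_type, best_score = obj_type, score
    return best_type

print(classify({"graspable", "hollow", "has_handle", "white"}))  # cup
print(classify({"transparent", "flat"}))  # None: no sufficient match
```

Note that, exactly as footnote [1] points out, the function only compares internal representations: it cannot tell whether the perceived properties were caused by a real object or by a simulation.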

Introduction of Hypothesis 3
Case 1 also includes the aspect that person A ‘verbally communicates’ something to person B. Without this verbal communication, B would know nothing about what is happening in A. In everyday life, a person usually perceives more than just one object, possibly many objects simultaneously. Therefore, knowing that a person is referring to a specific object and not one of the many other objects is not self-evident.

In Case 1, it was stated: Person A says, “There is an object X with properties Y.” And another person B says, “Yes, I agree.”

When a person ‘says’ something that all participants recognize as ‘elements of a language L,’ these elements of language L are ‘sounds,’ i.e., sound waves that are generated on one side by a speech organ (with a mouth) and received on the other side by an ‘ear.’ Let us simply call the generating organ the ‘actor’ and the receiving organ the ‘sensor.’ In verbal communication, then, a person produces sounds with an actor, and the other participant in the communication receives these sounds through his sensor.

It is, of course, clear that the spoken and then also heard sounds of a language L have no direct relation to the internal processes of perception, remembering, and the ‘agreement process’ between perception and memory. However, it can be assumed that there must be ‘internal neural processes’ in the speaker and listener that correspond to the generated sounds, otherwise the actor could not act.[1] In the case of the sensor, it was already pointed out earlier how stimuli from the outside world lead to activations of neurons, creating a flow of neural signals.

Just as it was generally assumed that there are neural signal flows and various abstract object structures that can be stored and further processed ‘internally,’ something similar must be assumed for the neural encoding of spoken and heard sounds. If one can distinguish elements, and certain combinations of elements, in the spoken acoustic sound material of a language, it is plausible to assume that these externally identifiable structures also occur in the internal neural realization.

The core idea of Hypothesis 3 can then be formulated as follows: There is a neural counterpart to the acoustically perceivable structure of a language L, which moreover is the ‘active’ part in producing spoken language and in ‘translating’ spoken sounds into the corresponding neural representations.

[1] The human speech organ is a highly complex system in which many systems work together, all of which must be neuronally controlled and coordinated.
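The core idea of Hypothesis 3, an internal counterpart that both produces sounds and translates heard sounds into internal representations, can be sketched as a pair of inverse mappings. The sound inventory and the numeric codes below are purely illustrative assumptions, not claims about actual neural coding.

```python
# Toy sketch of Hypothesis 3: an internal ('neural') encoding that
# mirrors the combinatorial structure of externally observable sounds.

SOUND_TO_CODE = {"ba": 0, "na": 1, "la": 2}   # invented sound inventory
CODE_TO_SOUND = {c: s for s, c in SOUND_TO_CODE.items()}

def hear(sounds):
    """Sensor side: translate heard sounds into internal codes."""
    return [SOUND_TO_CODE[s] for s in sounds]

def speak(codes):
    """Actor side: produce sounds from internal codes."""
    return [CODE_TO_SOUND[c] for c in codes]

word = ["ba", "na", "na"]
assert speak(hear(word)) == word  # structure survives the round trip
```

The point of the sketch is only that combinations of external sound elements have a structure-preserving internal counterpart, usable in both directions.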

Introduction of Hypothesis 4
With Hypothesis 2 (memory, comparison of perception and memory) and Hypothesis 3 (independent sound system of a language), the next Hypothesis 4 arises, suggesting that there must be some ‘relationship’ (mathematically: a mapping) between the sound system of a language L and the memorable objects along with current perception. This mapping allows ‘sounds’ to be connected with ‘objects (including properties)’ and vice versa.
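Such a mapping between the sound system and memorable objects can be sketched as a pair of dictionaries that are filled by learned associations and can be read in both directions. All names below are illustrative assumptions invented for the sketch.

```python
# Toy sketch of Hypothesis 4: a learned, bidirectional mapping between
# elements of a sound system and remembered object types.

sound_to_object = {}   # the 'lexicon', initially empty
object_to_sound = {}

def learn(sound, obj_type):
    """Associate a sound pattern with an object type, in both directions."""
    sound_to_object[sound] = obj_type
    object_to_sound[obj_type] = sound

learn("cup", "container_with_handle")    # invented examples
learn("ball", "round_rolling_thing")

print(sound_to_object["cup"])                  # container_with_handle
print(object_to_sound["round_rolling_thing"])  # ball
```

Starting from empty dictionaries reflects the point made below: the mapping is not innate but must be built up over time.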

In Case 1, person A has the perception of an object X with properties Y, along with a memory that ‘sufficiently matches,’ and person A says: “There is an object X with properties Y.” Another person B says, “Yes, I agree.”

Given the diversity of the world, constant changes, and the variety of possible sound systems [1], as well as the fact that humans undergo massive growth processes from embryo to continually developing person, it is unlikely that possible relationships between language sounds and perceived and remembered objects are ‘innate.’

This implies that this relationship (mapping) between language sounds and perceived and memorable objects must develop ‘over time,’ often referred to as ‘learning.’ Without certain presets, learning can be very slow; with available presets, it can be much faster. In the case of language learning, a person typically grows up in the presence of other people who generally already practice a language, which can serve as a reference system for growing individuals.

Language learning is certainly a lengthy process that includes not only individual acquisition but also inter-individual coordination among all those who practice a specific language L together.

As a result, learning a language means that not only is the ‘structure of the sound system’ learned, but also the association of elements of the sound system with elements of the perception-memory structure.[2]

In Case 1, therefore, person A must know which sound structure the users of language L employ for an object X with properties Y, and so must person B. If A and B have the same ‘relationship knowledge’ of sounds to objects and vice versa, person B can ‘understand’ A’s verbal expression “There is an object X with properties Y”: B also has a perception of this object, remembers an object X that sufficiently matches the perceived object, and would name this fact in the same way A did. Then person B can say, “Yes, I agree.”
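The agreement in Case 1 can then be sketched as two persons applying the same learned sound-object mapping to the same perceived scene: B agrees because his own naming of the perception coincides with A’s utterance. The shared lexicon, the properties, and the match threshold are illustrative assumptions.

```python
# Toy sketch of the Case 1 agreement: A and B share the same mapping
# of sounds to abstract object types, so each names the scene alike.

SHARED_LEXICON = {"cup": {"graspable", "hollow", "has_handle"}}

def utterance_for(perceived, lexicon, threshold=0.6):
    """Name a perception if some remembered type sufficiently matches."""
    for sound, props in lexicon.items():
        if len(perceived & props) / len(props) >= threshold:
            return f"There is a {sound}."
    return None

scene = {"graspable", "hollow", "has_handle", "white"}
said_by_a = utterance_for(scene, SHARED_LEXICON)
said_by_b = utterance_for(scene, SHARED_LEXICON)  # B perceives the same scene

if said_by_a == said_by_b:
    print("B: Yes, I agree.")
```

If A and B had learned different lexicons, or perceived different scenes, the utterances would diverge and no agreement would be reached, which is exactly why the inter-individual coordination described above is needed.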

[1] Consider the many thousands of languages that still exist on planet Earth, where different languages can be used in the same living environment. The same ‘perception objects’ can thus be named differently depending on the language.

[2] The study of these matters has a long history with very, very many publications, but there is not yet a universally accepted unified theory.

–!! Not finished yet !!–