To work within the theory of Generative Cultural Anthropology [GCA] one needs a practical tool which allows the construction of dynamic world models, the storage of these models, and their usage within a simulation game environment together with an evaluation tool. To prepare a simulation game within a Hybrid Simulation Game Environment [HSGE] one needs an iterative development process, which is described below.
CASE STUDY – SIMULATION GAMES – PHASE 1: Iterative Development of a Dynamic World Model – Part of the Generative Cultural Anthropology [GCA] Theory
1 Overview of the Whole Development Process
2 Cognitive Aspects of Symbolic Expressions
3 Symbolic Representations and Transformations
4 Abstract-Concrete Concepts
5 Implicit Structures Embedded in Experience
5.1 Example 1
In this section several case studies will be presented. In Case Study 1 it will be shown how the DAAI paradigm can be mapped into a new concept of Generative Cultural Anthropology [GCA]. Then it will be shown how this GCA can be implemented in a way which allows all kinds of cultural processes, including those dealing with city planning and the participation of citizens in this process. The working title of the related software project is komega. Go back to the overall framework.
COLLECTION OF PAPERS
(The order of the different papers corresponds roughly to the general order of the project)
FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper outlines how one can map the DAAI paradigm directly into the GCA paradigm (April 19, 2020): case1-daai-gca-v1
A first GCA open research project [GCA-OR No.1]. This paper outlines a first open research project using the GCA; it will be the framework for the first implementations (May 5, 2020): GCAOR-v0-1
Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process by looking at the participating actors, where certain groups of people (experts of different kinds) realize the generation, the exploration, and the testing of dynamic models as part of a surrounding society. Engineering is clearly not separated from society (April 9, 2020): case1-population-start-part0-v1
Bootstrapping some Citizens. This paper clarifies the set of general assumptions which can and should be presupposed for every kind of real-world dynamic model (April 4, 2020): case1-population-start-v1-1
Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment, combining a common web-conferencing tool with an interactive web page of our own (May 23, 2020): HSGE-v2 (May 5, 2020): HSGE-v0-1
Changes: July 20, 2019 (rewriting of the introduction)
This Philosophy Lab section of the uffmm science blog is the latest extension of the uffmm blog, added in July 2019. It has been prompted by meta-reflections on the AAI engineering approach.
SCOPE OF SECTION
This section deals with the following topics:
How can we talk about science while including the scientists (and engineers!) as the main actors? In a certain sense one can say that science is mainly a specific way of communicating and of verifying the communicated content. This presupposes that there is something called knowledge located in the heads of the actors.
The presupposed knowledge usually targets different scopes encoded in different languages. A language enables or delimits meaning, and meaning objects can in turn enable or delimit a certain language. As part of society and as exemplars of the species homo sapiens, scientists share the general behavioral tendency to assimilate majority behavior and majority meanings. This can reduce the realm of knowledge in many ways. Biological life in general counteracts physical entropy by autopoietically generating more and more complexity in the course of time. This is due to a built-in creativity and the freedom to select. Thus life is always oscillating between conformity and experiment.
The survival of modern societies depends highly on the ability to communicate with maximal sharing of experience, exploring possible state spaces with their pros and cons fast and extensively. Knowledge must be visible to all around the clock, computable, modular, constructive, in the format of interactive games with transparent rules. Machines should be re-formatted as primarily helping humans, not the other way around.
To enable such new open and dynamic knowledge spaces one has to redefine computing machines, extending the Turing machine (TM) concept to a world machine (WM) concept which offers several new services for social groups, whole cities, or countries. In the future there will be no distinction between man and machine because there will be a complete symbiotic unification: the machines will have become an integral part of a personality, the extension of the body in some new way, probably far beyond the cyborg paradigm.
The basic creativity and freedom of biological life will be further developed into a fundamental, all-embracing spirituality of life in the universe, targeting a re-creation of the whole universe by using the universe for the universe.
Review of the EU’s Trustworthy AI ethics, together with Denning & Denning (2020) and other authors, from the point of view of GCA theory (May 11, 2020).
Review of Tsu and Nourbakhsh (2020), When Human-Computer Interaction Meets Community Citizen Science. Empowering communities through citizen science. In the Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM 2017: review-Tsu-et-2020-acm-CommunitySciences (April-6, 2020)
Review of Allen Newell and Herbert A. Simon (1972), Human Problem Solving (last update: Oct 9, 2019): review-newell-simon-1972-V1-4 Comment: This document will be replaced several times by the next extended version with the discussion of the text. In the end one document spans one complete chapter.
Review of Peter Gärdenfors (2014), Geometry of Meaning. Semantics Based on Conceptual Spaces, Part 1, A Review from a Philosophical Point of View: review-gaerdenfors2014-c1-2
Review of Charles R. Gallistel (1990), The Organization of Learning. Part 1, A Review from a Philosophical Point of View: review-gallistel-part1-C1
Last change: February 28, 2019 (several corrections)
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of how to proceed in a bottom-up approach.
BOTTOM-UP: THE GENERAL BLUEPRINT
As the introductory figure shows, it is assumed here that there is a collection of citizens and experts who offer their individual knowledge, experiences, and skills, ‘putting them on the table’ when challenged by a given problem P.
This knowledge is in the beginning not structured. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first ‘objects’ and ‘relations’ characterizes a set of facts which define a ‘situation’ or a ‘state’ as a collection of ‘facts’. Such a situation/state can also be understood as a first simple ‘model‘ in response to a given problem. A model is as such ‘static‘; it describes what ‘is’ at a certain point in ‘time’.
In a next step the group has to identify possible ‘changes‘ which can be associated with at least one fact. There can be many possible changes, which may need different durations to come into effect. These effects can happen as ‘exclusive alternatives’ or in ‘parallel’. Applying the possible changes to a situation generates ‘successors’ of the actual situation. A sequence of situations generated by applied changes is usually called a ‘simulation‘.
If one allows the interaction of real actors with a simulation by associating a real actor with one of the actors ‘inside the simulation’, one turns the simulation into an ‘interactive simulation‘, which is basically a ‘computer game‘ (short: ‘egame‘).
One can use interactive simulations e.g. to (i) learn about the dynamics of a model, (ii) test the assumptions of a model, or (iii) test the knowledge and skills of the real actors.
Making new experiences with a simulation allows a continuous improvement of the model and its change rules.
Additionally one can include more citizens and experts into this process and one can use available knowledge from databases and libraries.
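The blueprint above (states as collections of facts, change rules producing successor states, and a sequence of states as a simulation) can be sketched in a few lines of Python. All fact names and rules are illustrative assumptions of mine, not part of the theory itself:

```python
# A 'state' is a set of facts; a 'change rule' (condition, remove, add)
# fires when its condition facts are present, producing a successor state.
State = frozenset

def apply_change(state, change):
    """Apply one change rule if its condition holds; else return None."""
    condition, remove, add = change
    if condition <= state:                  # all required facts present
        return (state - remove) | add      # successor state
    return None

def successors(state, changes):
    """All successor states reachable by one applicable change."""
    return [s for s in (apply_change(state, c) for c in changes) if s is not None]

# Toy city-planning situation with a single possible change
start = State({"plot_empty", "budget_available"})
changes = [
    (State({"plot_empty", "budget_available"}),   # condition
     State({"plot_empty", "budget_available"}),   # facts removed
     State({"playground_built"})),                # facts added
]

# A 'simulation' is the sequence of states produced by applied changes
simulation = [start]
while True:
    nxt = successors(simulation[-1], changes)
    if not nxt:
        break
    simulation.append(nxt[0])  # a game would let real actors choose here

print(simulation[-1])  # frozenset({'playground_built'})
```

The passive loop always picks the first applicable change; making that choice interactive is exactly what turns the simulation into a game.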
EPISTEMOLOGY OF CONCEPTS
As outlined in the preceding section about the blueprint of a bottom-up process, there will be heavy usage of concepts to describe states of affairs.
The literature about this topic in philosophy as well as in many scientific disciplines is overwhelming, and therefore this small text can only be a ‘pointer’ into a complex topic. Nevertheless I will use exactly this pointer to explore the topic further.
While the literature mainly deals with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical (especially epistemological) view and at the same time respects many results of the scientific disciplines.
The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely ‘unconsciously‘, and (iii) the ‘conscious‘ part of the brain.
The most important relationship between the ‘conscious’ and the ‘unconscious’ part of the brain is the ability of the unconscious brain to transform automatically incoming concrete sense-experiences into more ‘abstract’ structures, which have at least three sub-dimensions: (i) different concrete material, (ii) a subset of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to use only a ‘few’ abstract structures (= abstract concepts) to deal with ‘many’ concrete events. Thus the abstract concept ‘chair’ can cover many different concrete chairs which have only a few properties in common. Additionally the chairs can occur in different ‘contexts’, associating them with different ‘relations’ which can specify possible different ‘usages’ of the concept ‘chair’.
Thus, if the actor perceives something which ‘matches’ some ‘known’ concept, then the actor is conscious not only of the empirical concrete phenomenon but simultaneously of the abstract concept, which is activated automatically. ‘Immediately’ the actor ‘knows’ that this empirical something is, e.g., a ‘chair’. Concretely: this concrete something matches an abstract concept ‘chair’ which as such can also cover many other concrete things, each of which can be partially different from the others.
From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there (“it is the case, that” = “it is true”) or not (“it is not the case, that” = “it is not true” = “it is false”), an actor cannot directly decide whether an abstract concept like ‘chair’ as such is ‘true’ in the sense that the concept ‘as a whole’ corresponds to concrete empirical occurrences. This follows from the fact that an abstract concept like ‘chair’ can match a nearly infinite set of possible concrete somethings, which are called ‘possible instances’ of the abstract concept. But a human actor can directly ‘check’ only a ‘few’ concrete somethings. Therefore the usage of abstract concepts like ‘chair’, ‘house’, ‘bottle’ etc. inherently implies an ‘open set’ of ‘possible’ concrete exemplars, and the usage of such concepts is therefore necessarily ‘hypothetical’. We can ‘in principle’ check the real extensions of these abstract concepts in everyday life as long as there is the ‘freedom’ to do such checks; if this ‘freedom of checking’ is not given, we lose the ‘truth’ of our concepts and thereby the basis for a realistic cooperation.
If some incoming perception is ‘not yet known’, because nothing given in the unconsciousness ‘matches’ it, it is in a basic sense ‘new’, and the brain will automatically generate a ‘new concept’.
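The extraction of abstract concepts from concrete instances described above can be toy-modeled in Python: an abstract concept as the intersection of the property sets of the concrete instances seen so far, and ‘matching’ as checking that a new instance carries all these common properties. The property names are purely illustrative assumptions:

```python
# An 'abstract concept' as the set of properties shared by all concrete
# instances; 'matching' tests a new instance against these properties.
def abstract_concept(instances):
    """Intersect the property sets of concrete instances."""
    props = [set(p) for p in instances]
    common = props[0]
    for p in props[1:]:
        common &= p
    return common

def matches(instance, concept):
    return concept <= set(instance)

chairs = [
    {"has_seat", "sittable", "four_legs", "wooden"},
    {"has_seat", "sittable", "swivel", "metal"},
]
CHAIR = abstract_concept(chairs)   # only the shared properties survive

print(matches({"has_seat", "sittable", "plastic"}, CHAIR))  # True
print(matches({"has_seat", "decorative"}, CHAIR))           # False
```

Note how the sketch also reproduces the ‘hypothetical’ character of concepts: CHAIR matches an open set of instances, only a few of which can ever be checked.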
THE DIMENSION OF MEANING
In Figure 2 one can find two further components: ‘language expressions’ and the ‘meaning relation’ which maps concepts into language expressions.
Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.
These language expressions are usually not ‘isolated structures’ but are embedded in relations which map the expression structures to conceptual structures, including the different instantiations of the abstract concepts and the associated contexts. By these relations the expressions are attached to the conceptual structures, which are called the ‘meaning‘ of the expressions; vice versa, the expressions are called the ‘language articulation’ of the meaning structures.
As far as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the ‘activation’ of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can exist (especially in children) without an available meaning relation.
When language expressions are used within a communicative act, their usage can activate in all participants of the communication the ‘learned’ concepts as their intended meanings. Having the meaning activated in someone’s ‘consciousness’ makes it a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is ‘backed up’ by some ‘real matter’ in the external world. Someone can utter that it is raining, and in the hearer of this utterance the intended concepts can become activated, while in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression “Look, it’s raining” has no counterpart in the real world; therefore we call the utterance in this case ‘false‘ or ‘not true‘.
THE DIMENSION OF TIME
The preceding figure 2 of the conceptual space is not yet complete. There is another important dimension based on the ability of the unconscious brain to ‘store’ certain structures in a ‘temporal order’, which enables an actor, under certain conditions, to decide whether a certain structure X occurred in consciousness ‘before’, ‘after’, or ‘at the same time as’ another structure Y.
Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures under certain conditions in a ‘temporal order’; (ii) it can detect ‘differences‘ between temporally succeeding structures; (iii) it can conceptualize these changes as ‘change concepts‘ (‘rules of change’), and it can classify different kinds of change, like ‘deterministic’ and ‘non-deterministic’ with different kinds of probabilities, as well as ‘arbitrary’ as in the case of ‘free learning systems‘. Free learning systems are able to behave in a ‘deterministic-like manner’, but they can also change their patterns on account of internal learning and decision processes in nearly any direction.
Based on memories of conceptual structures and derived change concepts (rules of change), the unconscious brain is able to generate different kinds of ‘possible configurations’, whose quality depends on the degree of dependencies within the ‘generating criteria’: (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions on ‘upcoming states’ (if all drinkable water were consumed, then one could not plan any further with drinkable water).
Last corrections: February 14, 2019 (added some more keywords; added emphases for central words)
Change: May 5, 2019 (adding the aspect of simulation and gaming; extending the view of the driving actors)
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the blueprint of the whole AAI analysis process. Here I leave out the topic of actor models (AM); the aspect of simulation and gaming is mentioned only briefly. For these topics see other posts.
THE AAI ANALYSIS BLUEPRINT
The Actor-Actor Interaction (AAI) analysis is understood here as part of an embracing systems engineering process (SEP), which starts with the statement of a problem (P) which includes a vision (V) of an improved alternative situation. It then has to be analyzed what such a new improved situation S+ looks like and how one can realize certain tasks (T) in an improved way.
The driving actors of such an AAI analysis are at least one stakeholder (STH), who communicates a problem P and an envisioned solution (ES) to an expert (EXPaai) with sufficient AAI experience. This expert will take the lead in the process of transforming the problem and the envisioned solution into a working solution (WS).
In the classical industrial case the stakeholder can be a group of managers from some company, and the expert is likewise represented by a whole team of experts from different disciplines, with the AAI perspective as the leading one.
In another case, which I will call here the communal case (e.g. a whole city), the stakeholders as well as the experts are members of the communal entity. As in the before-mentioned cases, there is some commonly accepted problem P combined with a first envisioned solution ES which shall be analyzed: what is needed to make it work? Can it work at all? What are the costs? Many other questions can arise. The challenge of including all relevant experience and knowledge from all participants is at the center of the communication, and transforming this available knowledge into some working solution which satisfies all stated requirements of all participants is a central condition for the success of the project.
It has to be taken into account that the driving actors are able to do this job because they have in their bodies brains (BRs), which in turn include some consciousness (CNS). The processes and states beyond consciousness are here called ‘unconscious‘, and the set of all these unconscious processes is called ‘the Unconsciousness’ (UCNS).
An important set of substructures of the unconsciousness are those which enable symbolic language systems with so-called expressions (L) on one side and so-called non-expressions (~L) on the other. Embedded in a meaning relation (MNR), the set of non-expressions ~L functions as the meaning (MEAN) of the expressions L, written as a mapping MNR: L <—> ~L. Depending on the involved sensors, the expressions L can occur either as acoustic events L_spk, as written visual patterns L_txt, as visual patterns in the form of pictures L_pict, or even in other formats, which will not be discussed here. The non-expressions can occur in every format which the brain can handle.
While written (symbolic) expressions L are associated with the intended meaning only through encoded mappings in the brain, the spoken expressions L_spk as well as the pictorial ones L_pict can show some similarities to the intended meaning. Within acoustic expressions one can ‘imitate‘ some sounds which are part of a meaning; even more, pictorial expressions can ‘imitate‘ the visual experience of the intended meaning to a high degree, though clearly not every kind of meaning.
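The meaning relation MNR: L <—> ~L can be sketched as a bidirectional mapping between expressions and internal non-expression structures. The concept tokens below are placeholders of my own choosing, standing in for whatever format the brain actually uses:

```python
# A meaning relation as a bidirectional dictionary: expressions map to
# internal concepts (their 'meaning'), and concepts map back to
# expressions (their 'language articulation').
class MeaningRelation:
    def __init__(self):
        self._to_meaning = {}   # expression -> concept
        self._to_expr = {}      # concept -> expression

    def add(self, expression, concept):
        self._to_meaning[expression] = concept
        self._to_expr[concept] = expression

    def meaning(self, expression):
        return self._to_meaning.get(expression)   # None: unknown expression

    def articulate(self, concept):
        return self._to_expr.get(concept)         # None: concept without expression

mnr = MeaningRelation()
mnr.add("chair", "CONCEPT_CHAIR")

print(mnr.meaning("chair"))             # CONCEPT_CHAIR
print(mnr.articulate("CONCEPT_CHAIR"))  # chair
print(mnr.meaning("rain"))              # None
```

The `None` cases mirror the text: an expression can be heard without a learned meaning, and (as with children) a concept can exist without any attached expression.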
DEFINING THE MAIN POINT OF REFERENCE
Because the space of possible problems and visions is nearly infinitely large, one has to define for a certain process the problem of the actual process together with the vision of a ‘better state of affairs’. This is realized by a description of the problem in a problem document D_p as well as in a vision statement D_v. Because a vision is usually not without a given context, one has to add all the constraints (C) which have to be taken into account for the possible solution. Examples of constraints are ‘non-functional requirements’ (NFRs) like “safety”, “real time”, or “without barriers” (for handicapped people). Part of the non-functional requirements are also definitions of win-lose states as part of a game.
AAI ANALYSIS – BASIC PROCEDURE
If the AAI check has been successful and there is at least one task T to be done in an assumed environment ENV, as well as at least one executing actor A_exec for this task and one assisting actor A_ass, then the AAI analysis can start.
ACTOR STORY (AS)
The main task is to elaborate a complete description of a process which includes a start state S* and a goal state S+, where the participating executive actors A_exec can reach the goal state S+ by doing some actions. While the imagined process p_v is a virtual (= cognitive/mental) model of an intended real process p_e, this virtual model p_v can only be communicated by symbolic expressions L embedded in a meaning relation. Thus the elaboration/construction of the intended process will be realized by using appropriate expressions L embedded in a meaning relation. This can be understood as a basic mapping of sensor-based perceptions of the supposed real world into abstract virtual structures automatically (unconsciously) computed by the brain. A special kind of this mapping is the case of measurement.
In this text three types of symbolic expressions L will be used: (i) pictorial expressions L_pict, (ii) textual expressions of a natural language L_txt, and (iii) textual expressions of a mathematical language L_math. The meaning part of these symbolic expressions as well as the expressions themselves will be called here an actor story (AS), with the different modes pictorial AS (PAS), textual AS (TAS), and mathematical AS (MAS).
The basic elements of an actor story (AS) are states which represent sets of facts. A fact is an expression of some defined language L which can be decided as being true in a real situation or not (the past and the future are special cases for such truth clarifications). Some facts can be identified as actors which can act on their own. The transformation from one state to a follow-up state has to be described with sets of change rules. The combination of states and change rules defines mathematically a directed graph (G).
Based on such a graph it is possible to derive an automaton (A) which can be used as a simulator. A simulator allows simulations. A concrete simulation takes a start state S0 as the actual state S* and computes with the aid of the change rules a follow-up state S1. This follow-up state then becomes the new actual state S*. Thus the simulation constitutes a continuous process which in general can be infinite. To make the simulation finite one has to define some stop criteria (C*). A simulation can be passive, without any interruption, or interactive. The interactive mode allows different external actors to select certain real values for the available variables of the actual state.
If certain win-lose states have been defined in the problem definition, then one can turn an interactive simulation into a game where the external actors can try to manipulate the process so as to reach one of the defined win-states. As soon as someone (which can be a team) has reached a win-state, the responsible actor (or team) has won. Such games can be repeated to allow the accumulation of wins (or losses).
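The step from simulator to game can be sketched directly: external actors choose among the applicable changes until a win-state or a stop criterion is reached. The states, rules, and win-set below are illustrative assumptions; a change rule is again a (condition, remove, add) triple over sets of facts:

```python
# An interactive simulation becomes a 'game' once win-states are defined
# and external actors choose which applicable change to apply.
def run_game(start, changes, win_states, choose, max_steps=100):
    state = start
    for step in range(max_steps):        # stop criterion C*: step limit
        if state in win_states:
            return ("won", state, step)
        options = [c for c in changes if c[0] <= state]  # applicable changes
        if not options:
            return ("stuck", state, step)
        cond, rem, add = choose(state, options)          # the interactive part
        state = (state - rem) | add
    return ("timeout", state, max_steps)

S = frozenset
start = S({"plot_empty"})
changes = [(S({"plot_empty"}), S({"plot_empty"}), S({"park_built"}))]
win = {S({"park_built"})}

# A trivial 'actor' that always picks the first option
result = run_game(start, changes, win, choose=lambda s, opts: opts[0])
print(result)   # ('won', frozenset({'park_built'}), 1)
```

Replacing the `choose` callback with real user input (or a team's decision) is what distinguishes the game from a passive simulation run.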
Gaming allows a far better experience of the advantages or disadvantages of some actor story than a mere passive simulation. Therefore the probability of detecting relevant aspects of an actor story with its given constraints is quite high with gaming, which increases the probability of improving the whole concept.
Based on an actor story with a simulator it is possible to increase the cognitive power of exploring the future even more. There exists the possibility of defining an oracle algorithm as well as different kinds of intelligent algorithms to support the human actor further. This will be described in other posts.
TAR AND AAR
If the actor story is completed (in a certain version v_i), then one can extract from the story the input-output profiles of every participating actor. This list represents the task-induced actor requirements (TAR). If one is looking for concrete real persons to do the job of an executing actor, the TAR can be used as a benchmark for assessing candidates for this job. The profiles of the real persons are called here actor-actor induced requirements (AAR), that is, the real profile compared with the ideal profile of the TAR. If the ‘distance’ between AAR and TAR exceeds some threshold, then the candidate either has to be rejected or one can offer some training to improve his AAR; the other option is to change the conditions of the TAR in a way that brings the TAR closer to the AARs.
The TAR is valid for the executive actors as well as for the assisting actors A_ass.
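A minimal sketch of the TAR/AAR comparison, under my own assumptions: profiles are dictionaries of skill ratings, the ‘distance’ is the sum of the candidate's shortfalls against the ideal profile, and the threshold is an arbitrary example value:

```python
# Compare a candidate's actual profile (AAR) against the task-induced
# ideal profile (TAR) as a simple shortfall distance over rated skills.
def profile_distance(tar, aar):
    """Sum of shortfalls of the actual profile against the ideal one."""
    return sum(max(tar[skill] - aar.get(skill, 0), 0) for skill in tar)

TAR = {"domain_knowledge": 4, "tool_handling": 3, "communication": 3}
AAR = {"domain_knowledge": 3, "tool_handling": 3, "communication": 1}

d = profile_distance(TAR, AAR)
print(d)                                                    # 3
print("acceptable" if d <= 2 else "training or rejection")  # training or rejection
```

Lowering a TAR rating (adapting the task conditions) is the third option the text mentions: it shrinks the distance without changing the candidate.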
If the actor story in some version v_i has reached a certain completion, one has to check whether the different constraints which accompany the vision document are satisfied by the story: AS_vi |- C.
Such an evaluation is only possible if the constraints can be interpreted with regard to the actor story AS in version v_i in a way that makes the constraints decidable.
For many constraints it can happen that they cannot, or cannot completely, be decided on the level of the actor story, but only in a later phase of the systems engineering process, when the actor story is implemented in software and hardware.
MEASURING OF USABILITY
Using the actor story as a benchmark one can test the quality of the usability of the whole process by doing usability tests.
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of the virtual meaning and the actor story.
VIRTUAL MEANING AND ACTOR STORY
In a textual actor story (TAS), using the symbolic expressions L0 of some everyday language, one can describe a state with a finite set of facts which should be decidable ‘in principle’ when related to the supposed external empirical environment of the problem P. Thus the constraint of ‘operational decidability‘ with regard to an empirical external environment imposes some constraints on the kinds of symbolic expressions which can be used. If there is more than one state (which is the usual case), then one has to provide a list of ‘possible changes‘. Each change is described with a symbolic expression L0x. The content of a change is at least one fact which changes between the ‘given’ state and the ‘succeeding’ state. Thus the virtual meaning of an actor must enable the actor to distinguish between a ‘given state’ q_now and a possible ‘succeeding state’ q_next. There can be more than one possible change with regard to a given state q_now. Thus a textual actor story (TAS) is a set of states connected by changes, all represented as finite collections of symbolic expressions.
In a pictorial actor story (PAS), using the graphical expressions Lg of some everyday pictorial language, one can describe a state with a finite set of facts realized as pictures of objects, properties, and relations between these objects. The graphs of the objects can be enhanced by graphs including symbolic expressions L0 of some everyday language. Again it should be decidable ‘in principle’ whether these pictorial facts can be related to the supposed external empirical environment of the problem P. Thus the constraint of ‘operational decidability’ with regard to an empirical external environment imposes some constraints on the kinds of symbolic expressions which can be used. If there is more than one state (which is the usual case), then one has to provide a list of ‘possible changes’. Each change is described with an expression Lgx. The content of a change is at least one fact which changes between the ‘given’ state and a ‘succeeding’ state. Thus the virtual meaning of an actor must enable the actor to distinguish between a ‘given state’ q_now and a possible ‘succeeding state’ q_next. There can be more than one possible change with regard to a given state q_now. Thus a pictorial actor story (PAS) is a set of states connected by changes, all represented as finite collections of graphical expressions.
In the case of a mathematical actor story (MAS) one has to distinguish two cases: (i) a complete formal description or (ii) a graphical presentation enhanced with symbolic expressions.
In case (i) it is similar to the textual mode, but the symbolic expressions L0 of some everyday language are replaced by the symbolic expressions Lm of some mathematical language. In this book we are using predicate logic syntax with a new semantics. In case (ii) one describes the actor story as a mathematical directed graph. The nodes (vertices) of the graph are understood as ‘states’ and the arrows connecting the nodes are understood as changes. A node representing a state can be attached to a finite set of facts, where a fact is a symbolic expression Lm representing objects, properties, and relations between these objects. Again it should be decidable ‘in principle’ whether these facts can be related to the supposed external empirical environment of the problem P. Thus the constraint of ‘operational decidability’ with regard to an empirical external environment imposes some constraints on the kinds of symbolic expressions which can be used. If there is more than one state (which is the usual case), then one has to use arrows which are labeled with symbolic change expressions Lmx. The content of a change is at least one fact which changes between the ‘given’ state and a ‘succeeding’ state. Thus the virtual meaning of an actor must enable the actor to distinguish between a ‘given state’ q_now and a possible ‘succeeding state’ q_next. There can be more than one possible change with regard to a given state q_now.
If the complete actor story is given, then there is no need for the additional change expressions LX, because one can infer the changes directly from the pairs of succeeding states. But if one wants to ‘generate’ an actor story beginning with the start state, then one needs the list of change expressions.
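The inference of a change from a pair of succeeding states can be sketched directly: the change is the set of facts that disappear plus the set of facts that appear between q_now and q_next. The fact names are illustrative assumptions:

```python
# Given a complete actor story (a sequence of states), infer each change
# as the pair (removed facts, added facts) between succeeding states.
def infer_change(q_now, q_next):
    removed = q_now - q_next
    added = q_next - q_now
    return removed, added

story = [
    frozenset({"plot_empty", "budget_available"}),
    frozenset({"park_built"}),
]

removed, added = infer_change(story[0], story[1])
print(sorted(removed), sorted(added))
# ['budget_available', 'plot_empty'] ['park_built']
```

Going the other way, from a start state and a list of change expressions to a generated story, is exactly the simulator's job and cannot be replaced by this inference.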
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of the meaning of (symbolic) expressions.
MEANING: REAL AND VIRTUAL
In semiotic terminology the ‘meaning‘ of a symbolic expression corresponds to the image of the mapping from symbolic expressions (L) into something else (non-L). This mapping is located in that system which is using this mapping. We can call this system a ‘semiotic system‘.
For the generation of an actor story we assume that the AAI experts as well as all the other actors collaborating with the AAI actors are input-output systems with changeable internal states (IS) as well as a behavior function (phi), written as phi: I x IS —> IS x O.
These actors are embedded in an empirical environment (ENV) which is continuously changing.
Parts of the environment can interact with the actors by inducing physical state-changes in parts of the actors (Stimuli (S), Input, (I)) as well as receiving physical responses from the actors (Responses (R), output (O)) which change parts of the environmental states.
Interpreting these actors as ‘semiotic systems’ implies that the actors can receive as input symbolic expressions (L) as well as non-symbolic events and they can output symbolic expressions (L) as well as some non-symbolic events (non-L). Furthermore the mapping from symbolic expressions into something else is assumed to happen ‘inside‘ the system.
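The behavior function phi: I x IS —> IS x O introduced above can be illustrated with a minimal input-output system. The concrete inputs, internal states, and outputs are placeholder assumptions of mine:

```python
# A minimal input-output system with changeable internal states:
# phi takes (input, internal_state) and returns (new_internal_state, output).
def phi(inp, internal_state):
    """One step of the actor's behavior function."""
    if inp == "greeting" and internal_state == "idle":
        return "engaged", "greeting_back"
    return internal_state, "silence"   # default: state unchanged, no response

state = "idle"
state, out = phi("greeting", state)
print(state, out)   # engaged greeting_back
state, out = phi("noise", state)
print(state, out)   # engaged silence
```

Because the internal state persists between steps, the same input can produce different outputs over time, which is the minimal sense in which such a system has changeable internal states.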
From a 3rd-person view one can distinguish the empirical environment external to the actor as well as the empirical states ‘inside’ the system (typically investigated by Physiology with many sub-disciplines).
The internal states on the cellular level have a small subset called ‘brain’ (less than 1% of all cellular elements). A subset of the brain cells enables what in a 1st-person view is called ‘consciousness‘. The ‘content’ of the consciousness consists of ‘phenomena‘ which are not ’empirical’ in the usual sense of ’empirical’. Using consciousness as the point of reference, everything else of the actor which is not part of consciousness is ‘not conscious‘ or ‘unconscious‘. The ‘unconsciousness‘ is then the set of all non-conscious states of the actor (which in the biological case of homo sapiens means more than 99% of all body states).
As empirical sciences have revealed there exist functional relations between empirical states of the external environment (S_emp) and the set of externally caused internal unconscious input states of the actor (IS_emp_uc).
The externally caused unconscious input states (IS_emp_uc) are further processed and mapped into a variety of internal unconscious states (IS_emp_uc_abstr), which are more ‘general’ than the original input states. Thus subsets of the externally caused unconscious internal states IS_emp_uc can be elements of the more abstract internal states IS_emp_uc_abstr.
These internal unconscious states are part of ‘networks’ and parts of different kinds of ‘hierarchies’.
There are many different kinds of internal operations working on these internal structures including the input states.
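The abstraction step from concrete input states to more general internal states can be illustrated with a small sketch. The concrete states and the grouping rule below are entirely invented placeholders; the point is only that each abstract state collects the concrete input states subsumed under it, so that elements of IS_emp_uc become members of classes in IS_emp_uc_abstr.

```python
# Invented concrete unconscious input states (IS_emp_uc): each tagged with
# a measurable detail that the abstraction will drop.
concrete_states = ["red-660nm", "red-700nm", "green-520nm", "green-540nm"]

def abstract(state: str) -> str:
    """Map a concrete input state to a more 'general' abstract state
    by discarding the concrete detail (a placeholder rule)."""
    return state.split("-")[0]

# Build IS_emp_uc_abstr: abstract state -> set of subsumed concrete states
IS_emp_uc_abstr: dict[str, set[str]] = {}
for s in concrete_states:
    IS_emp_uc_abstr.setdefault(abstract(s), set()).add(s)

print(IS_emp_uc_abstr)
```

Such groupings can then themselves be organized into the ‘networks’ and ‘hierarchies’ mentioned above, e.g. by abstracting once more over the abstract states.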
Parts of the internal structures can function as ‘meaning’ (M) for other parts of internal structures which function as ‘symbolic expressions’ (L). Symbolic expressions together with the meaning-constituting elements can be used by an actor (seen as a semiotic system) as a ‘symbolic language’ whose observable part are the ‘symbols’ (written, spoken, gestures, …) and whose non-observable part is the mapping relation (encoding) from symbols into the internal meaning elements.
The primary meaning of a language is therefore a ‘virtual world of states inside the actor’, in contrast to the ‘external empirical world’. Parts of this virtual meaning world can correspond to parts of the empirical world outside. To control such an important relationship one needs commonly defined empirical measurement procedures (MP) which produce external empirical events that can be repeatedly perceived by a population of actors, who can compare these processes and events with their 1st-person conscious phenomena (PH). If it is possible for an actor (an observer) to identify those phenomena which correspond to the external measurement events, then it is possible (in principle) to define that subset of phenomena (PH_emp) which are correlated with events in the external empirical world. Usually the phenomena PH_emp which correspond to external empirical events are a true subset of the set of all possible phenomena, written as PH_emp ⊂ PH.
While the empirical phenomena PH_emp are ‘concrete’ phenomena, the non-empirical phenomena PH_abs = PH − PH_emp are ‘abstract’ in the sense that an empirical phenomenon p_emp can be an element of a non-empirical phenomenon p_abs if p_emp is not new.
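The subset relation PH_emp ⊂ PH can be stated directly with set operations. The phenomena named below are invented placeholders; the sketch only shows that the abstract phenomena PH_abs are what remains of PH after removing the empirically correlated phenomena.

```python
# All possible phenomena (PH) of an actor; the names are illustrative only.
PH = {"door-perceived", "color-red", "imagined-unicorn", "number-concept"}

# Phenomena correlated with external measurement events (PH_emp).
PH_emp = {"door-perceived", "color-red"}

# Abstract (non-empirical) phenomena: PH_abs = PH - PH_emp.
PH_abs = PH - PH_emp

# PH_emp is a true (proper) subset of PH: PH_emp ⊂ PH.
assert PH_emp < PH
print(sorted(PH_abs))
```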
While the virtual meaning of a symbolic language is realized by abstract structures which can be ‘cited’ in the consciousness as p_abs, the empirical meaning occurs as concrete structures which can be ‘cited’ by the consciousness.
All meaning elements can occur as part of a virtual spatial structure (VR) and as part of a virtual temporal structure (VT).
There is no 1-to-1 mapping from the spatial and temporal structures of the external empirical world into the virtual internal world of meanings.
If it is possible to correlate virtual meaning structures with external empirical structures we call this ‘empirical soundness’ or ‘empirical truth’.
An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the generation of the actor story (AS).
Getting from the problem P to an improved configuration S, measured by some expectation E, requires a process characterized by a set of necessary states Q which are connected by necessary changes X. Such a process can be described with the aid of an actor story AS.
The target of an actor story (AS) is a full specification of all identified necessary tasks T which lead from a start state q* to a goal state q+, including all possible and necessary changes X between the different states Q.
A state is here considered as a finite set of facts (F) which are structured as expressions of some language L distinguishing names of objects (like ‘D1’, ‘Un1’, …) as well as properties of objects (like ‘being open’, ‘being green’, …) or relations between objects (like ‘the user stands before the door’). There can also be a ‘negation’ like ‘the door is not open’. Thus a collection of facts like ‘There is a door D1’ and ‘The door D1 is open’ can represent a state.
Changes from one state q to a successor state q’ are described by stating which action of an actor deletes previous facts or creates new facts.
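The state-and-change scheme just described can be sketched as sets of facts with delete and create lists (similar in spirit to a STRIPS-style operator, which is an assumption of this sketch, not a claim about the theory). The fact names are invented for the door example above.

```python
# A state is a finite set of facts; frozenset makes states immutable values.
q_start = frozenset({"Door(D1)", "Closed(D1)", "Before(User,D1)"})

def apply_change(state: frozenset, delete: set, create: set) -> frozenset:
    """Successor state q' = (q - delete) | create:
    the action deletes previous facts and creates new facts."""
    return frozenset(state - delete) | frozenset(create)

# The action 'open D1' deletes the fact 'Closed(D1)' and creates 'Open(D1)';
# all other facts persist unchanged into the successor state.
q_next = apply_change(q_start, delete={"Closed(D1)"}, create={"Open(D1)"})

print(sorted(q_next))
```

A sequence of such changes X applied from the start state q* yields exactly the chain of states an actor story has to specify.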
In this approach at least three different modes of an actor story will be distinguished:
A textual mode generating a Textual Actor Story (TAS): in the textual mode a text in some everyday language (e.g. English) describes the states and changes in plain prose. Because in the case of a written text the meaning of the symbols is hidden in the heads of the writers, it can help to parallelize the written text with the pictorial mode.
A pictorial mode generating a Pictorial Actor Story (PAS). In a pictorial mode the drawings represent the main objects with their properties and relations in an explicit visual way (like a Comic Strip). The drawings can be enhanced by fragments of texts.
A mathematical mode generating a Mathematical Actor Story (MAS): this can be done either (i) by a pictorial graph with nodes and edges as arrows associated with formal expressions or (ii) by a complete formal structure without any pictorial elements.
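For the mathematical mode, variant (i), the actor story can be sketched as a directed graph whose nodes are states and whose edges are labelled changes. The concrete story below (states, labels, and the helper `path_exists`) is invented for illustration; it only shows that in a MAS one can formally check whether some sequence of changes leads from the start state q* to the goal state q+.

```python
# Nodes: states of the actor story, each a set of facts (illustrative).
nodes = {
    "q*": {"Closed(D1)", "Before(User,D1)"},   # start state
    "q1": {"Open(D1)", "Before(User,D1)"},
    "q+": {"Open(D1)", "Passed(User,D1)"},     # goal state
}

# Edges: (source state, change label X, target state).
edges = [
    ("q*", "open(D1)", "q1"),
    ("q1", "pass(User,D1)", "q+"),
]

def path_exists(start: str, goal: str) -> bool:
    """Check whether some sequence of changes leads from start to goal."""
    frontier, seen = [start], set()
    while frontier:
        n = frontier.pop()
        if n == goal:
            return True
        seen.add(n)
        frontier += [t for (s, _, t) in edges if s == n and t not in seen]
    return False

print(path_exists("q*", "q+"))  # the story reaches the goal state
```

Variant (ii) would drop the pictorial graph and keep only the formal structure, i.e. the sets `nodes` and `edges` themselves.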
For every mode it has to be shown how an AAI expert can generate an actor story out of the virtual cognitive world of his brain and how it is possible to decide the empirical soundness of the actor story.