STARTING WITH PYTHON3 – The very beginning – part 5

Journal: uffmm.org,
ISSN 2567-6458, July 18-19, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email:
gerd@doeben-henisch.de

CONTEXT

This is the next step in the python3 programming project. The overall context is still the python Co-Learning project.

SUBJECT

After a first look at the environment for python programming we started with the structure of the python programming language. In this section we continue with the object types sequence and string, and further programming elements are shown in a simple example of a creative actor.

Remark: for general help information go directly to the python manuals, which you can find associated with the entry for python 3.7.3 if you press the Windows button, look at the list of apps (= programs), and identify the entry for python 3.7.3. If you open the python entry by clicking you see the sub-entry python 3.7.3 Manuals. If you click on this sub-entry the python documentation will open. In this documentation you can find nearly everything you will need. For beginners there is even a small tutorial.

SCENARIO

For the further discussion of additional properties of python string and sequence objects I will again assume a simple scenario. I will expand the last scenario with the simple input-output actor by introducing some creativity into the actor. This means that the actor again receives either one word or a sequence of words, but instead of classifying the word according to some categories, or instead of giving back the list of the multiple words as individual entities, the actor will change the input creatively.
In the case of a single word the actor will re-order the symbols of the string, and additionally it can replace one individual symbol by some random symbol out of a finite alphabet.
In the case of multiple words the actor will first partition the sequence of words into the individual words of a list, then it will re-order the items of this list, then re-order the letters within each word, and finally it can replace in every word one individual symbol by some random symbol out of a finite alphabet. After these operations the list is concatenated again into one sequence of words.
In this version of the program one can repeat in two ways: either (i) manually input new words or (ii) redirect the output into the input, so that the actor can continue to change the output further.
Interesting feature, cognitive entropy: If the user always selects the closed world option, then the set of available letters will not be expanded during the repetitions. After some repetitions this reveals the implicit tendency of all words to become more and more equal, until only one type of word has ‘survived’. This follows from the random character of the process, which increases the chances of the more frequent letters to overrun the less frequent ones. The other option is the open world option. Here, in a repetition, a completely new letter can be introduced into a single word. This opposes the implicit tendency of cognitive entropy to favor the big numbers against the smaller ones.

How can this scenario be realized?

ACTOR STORY

1. There is a user (as executive actor) who can enter single or multiple words into the input interface of an assisting actor.
2. After confirming the input the assisting actor will respond in a creative manner. This creativity is manifested in changed orders of symbols and words as well as in replaced symbols.
3. After the response the user can either repeat the sequence or stop. If repeating, he can select between two options: (i) enter manually new words as input or (ii) redirect the output of the system as new input. This allows a continuous creative change of the words.
4. The repeated re-direction again offers two options: (i) closed world, no real new input, or (ii) open world, some real new input.

IMPLEMENTATION

Download the python source code here. The linked text appears as an HTML document, because the blog software does not allow uploading a python program file directly.

stringDemo2.py
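
Since only a link can be given here, the following is a minimal sketch of what the two helper functions named in the demo output below, worder() and sqorder(), might look like. This is an assumption for illustration only; the actual implementation in stringDemo2.py may differ in its details.

import random as rnd

# Hypothetical sketch of the two helpers named in the demo output below.
# The real stringDemo2.py may implement them differently.

def worder(w, open_world=False):
    # Re-order the letters of w by random sampling; in the open world case
    # one letter may additionally be replaced by a completely new random letter.
    wl = list(w)
    wnew = ''
    for i in range(len(wl)):
        r = rnd.randrange(0, len(wl))   # sampling with repetition, letters can double
        wnew = wnew + wl[r]
    if open_world and len(wnew) > 0:
        pos = rnd.randrange(0, len(wnew))
        newletter = chr(rnd.randrange(ord('a'), ord('z') + 1))
        wnew = wnew[:pos] + newletter + wnew[pos + 1:]
    return wnew

def sqorder(ws):
    # Re-order the words of a multi-word string by random sampling.
    wl = ws.split()
    wlnew = []
    for i in range(len(wl)):
        wlnew.append(wl[rnd.randrange(0, len(wl))])
    return ' '.join(wlnew)

print(worder('abcde'))          # e.g. 'ebaca', as in the demo below
print(sqorder('abc def geh'))   # e.g. 'def geh geh'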

DEMOS

Single word in a closed world:

PS C:\Users\gerd_2\code> python stringDemo2.py
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
1
Closed world =’1′ or Open world =’2′
1
Input a single word
abcde
Your input word is = abcde
New in-word order with worder():
ebaca
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ebaca
New in-word order with worder():
ccbaa
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ccbaa
New in-word order with worder():
ccccb
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ccccb
New in-word order with worder():
ccccc
STOP = ‘N’, CONTINUE != ‘N’

The original word ‘abcde’ has been changed to ‘ccccc’ in a closed world environment. If one introduces the open world option, this monotonous degeneration cannot become permanent, because new letters can always enter the words.

Multiple words in a closed world

PS C:\Users\gerd_2\code> python stringDemo2.py
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
1
Closed world =’1′ or Open world =’2′
1
Input multiple words
abc def geh
Your input words are = abc def geh
List version of sqorder input =
[‘abc’, ‘def’, ‘geh’]
New word order in sequence with sqorder():
def geh geh
List version of input in mcworder()=
[‘def’, ‘geh’, ‘geh’]
New in-word order with worder():
fef
New in-word order with worder():
hee
New in-word order with worder():
ege
New word-sequence order :
fef hee ege
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = fef hee ege
List version of sqorder input =
[‘fef’, ‘hee’, ‘ege’]
New word order in sequence with sqorder():
fef fef ege
List version of input in mcworder()=
[‘fef’, ‘fef’, ‘ege’]
New in-word order with worder():
fff
New in-word order with worder():
fee
New in-word order with worder():
eee
New word-sequence order :
fff fee eee
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = fff fee eee
List version of sqorder input =
[‘fff’, ‘fee’, ‘eee’]
New word order in sequence with sqorder():
eee fee fee
List version of input in mcworder()=
[‘eee’, ‘fee’, ‘fee’]
New in-word order with worder():
eee
New in-word order with worder():
eef
New in-word order with worder():
eee
New word-sequence order :
eee eef eee
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = eee eef eee
List version of sqorder input =
[‘eee’, ‘eef’, ‘eee’]
New word order in sequence with sqorder():
eee eee eee
List version of input in mcworder()=
[‘eee’, ‘eee’, ‘eee’]
New in-word order with worder():
eee
New in-word order with worder():
eee
New in-word order with worder():
eee
New word-sequence order :
eee eee eee
STOP = ‘N’, CONTINUE != ‘N’

You can see that cognitive entropy shows up under the closed world assumption in the multi-word scenario too.

EXERCISES

Here are some details of objects and operations.

Letters and Numbers

With ord('a') one can get the decimal code of the letter, 97, and the other way around one can translate the decimal number 97 into a letter with chr(97), which yields 'a'. For ord('z') one gets 122. One can use these numbers to compute characters, which has been used in the program to find random characters to be inserted into a word.
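
For example (an illustrative interactive session, not taken from the program):

>>> ord('a')
97
>>> chr(97)
'a'
>>> ord('z')
122
>>> chr(ord('a') + 3)   # compute a letter from a number
'd'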

Strings and Lists

There are some operations only available for list objects and others only for string objects. Thus it is not possible to change and re-arrange a string directly, but translating the string into a list, applying some operations to the list, and then transferring the changed list back into a string works fine. Translating a word w into a list wl by wl = list(w) allows re-ordering its elements by appending them to a new list: wll.append(wl[r]). Afterwards I have translated the list back into a string by constructing a new string wnew, concatenating all letters step by step: wnew=wnew+wl[i]. If you try to convert the list back directly with str(), as in the following example, then the result is not the original word but a string that still looks like a list:

>>> w = 'abcd'
>>> wl = list(w)
>>> wl
['a', 'b', 'c', 'd']
>>> wn = str(wl)
>>> wn
"['a', 'b', 'c', 'd']"
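
Putting the steps together, a small sketch of the intended round trip could look like this (the variable names follow the text above; the actual program may differ):

import random as rnd

w = 'abcd'
wl = list(w)                  # ['a', 'b', 'c', 'd']
wll = []
for i in range(len(wl)):
    r = rnd.randrange(0, len(wl))
    wll.append(wl[r])         # re-order by appending randomly chosen letters
wnew = ''
for i in range(len(wll)):
    wnew = wnew + wll[i]      # rebuild the string letter by letter
print(wnew)                   # e.g. 'cabd' or 'ddca'

The built-in ''.join(wll) would do the final concatenation in one step.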

Immediate Help

If one needs direct information about the operations which are possible with a certain object, like here the string object ‘w’, then one can ask for all possible operations like this:

>>> dir(w)
[‘__add__’, ‘__class__’, ‘__contains__’, ‘__delattr__’, ‘__dir__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__ge__’, ‘__getattribute__’, ‘__getitem__’, ‘__getnewargs__’, ‘__gt__’, ‘__hash__’, ‘__init__’, ‘__init_subclass__’, ‘__iter__’, ‘__le__’, ‘__len__’, ‘__lt__’, ‘__mod__’, ‘__mul__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__rmod__’, ‘__rmul__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘capitalize’, ‘casefold’, ‘center’, ‘count’, ‘encode’, ‘endswith’, ‘expandtabs’, ‘find’, ‘format’, ‘format_map’, ‘index’, ‘isalnum’, ‘isalpha’, ‘isascii’, ‘isdecimal’, ‘isdigit’, ‘isidentifier’, ‘islower’, ‘isnumeric’, ‘isprintable’, ‘isspace’, ‘istitle’, ‘isupper’, ‘join’, ‘ljust’, ‘lower’, ‘lstrip’, ‘maketrans’, ‘partition’, ‘replace’, ‘rfind’, ‘rindex’, ‘rjust’, ‘rpartition’, ‘rsplit’, ‘rstrip’, ‘split’, ‘splitlines’, ‘startswith’, ‘strip’, ‘swapcase’, ‘title’, ‘translate’, ‘upper’, ‘zfill’]
>>>

If ‘w’ is a sequence of strings/ words like w='abc def', then the list() operation is of no help, because one gets a list of letters, not of words:

>>> wl2 = list(w)
>>> wl2
['a', 'b', 'c', ' ', 'd', 'e', 'f']

For the program one needs a list of single words. Looking at the possible operations for string objects shown by dir() above, one sees the name ‘split’. We can ask what this ‘split’ is about:

>>> help(str.split)
Help on method_descriptor:

split(self, /, sep=None, maxsplit=-1)
    Return a list of the words in the string, using sep as the delimiter string.

    sep
      The delimiter according which to split the string.
      None (the default value) means split according to any whitespace,
      and discard empty strings from the result.
    maxsplit
      Maximum number of splits to do.
      -1 (the default value) means no limit.

This sounds as if it could be of help. Indeed, that is the mechanism I have used:

>>> w = 'abc def'
>>> w
'abc def'
>>> wl = w.split()
>>> wl
['abc', 'def']

Function Definition

As you can see in the program text, the minimal structure of a function definition is as follows:

def fname(Input-Arguments):
    some commands
    [return VarNames]

The name is needed to identify the function, the input variables bring values from the outside into the function to work on, and finally, optionally, you can return the values of some variables back to the caller outside the function.
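
As a small illustration (not taken from the original program), here is a complete function with an input argument and a return value:

def countwords(words):
    wl = words.split()        # 'words' is expected to be a string
    return len(wl)            # return the number of words to the caller

n = countwords('abc def geh')
print(n)                      # prints 3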

The For-Loop

Besides the loop organized by the while-command there is another loop command with a fixed number of repetitions, indicated by the for-command:

for i in range(n):
    commands

The operator range() delivers a sequence of numbers from 0 to n-1 and binds these, one after the other, to the variable i. Thus the variable i takes all the numbers from range() in turn. During each repetition all the commands listed after the for-command will be executed.
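
A short illustrative example (not from the original program):

w = 'abcde'
for i in range(len(w)):       # i takes the values 0, 1, ..., len(w)-1
    print(i, w[i])            # prints each position together with its letter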

Random Numbers

In this program I have used random numbers very heavily. Before using them one has to import the random number library. I did this with the statement:

import random as rnd

This additionally introduces the abbreviation ‘rnd’. Thus if one wants to call a certain operation from the random module one can write it like this:

r=rnd.randrange(0,n)

In this example one uses the randrange() operation from random with the arguments (0,n); this means that an integer random number will be generated in the interval [0,n-1].
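
For example, one could use this to pick a random letter out of a word (an illustration, not the original code):

import random as rnd

w = 'abcde'
r = rnd.randrange(0, len(w))  # integer in the interval [0, len(w)-1]
print(r, w[r])                # a random position and the letter at that position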

If-Operator with Combined Conditions

In the program you can find statements like

if opt=='1' and opt2=='1' and opt3=='1':

Following the if-keyword you see three different conditions

opt=='1'
opt2=='1'
opt3=='1'

which are combined into one expression by the logical operator ‘and’. This means that all three conditions must be true simultaneously; otherwise the combined condition is false and the if-branch will not be executed.
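
A small self-contained illustration of such a combined condition:

opt, opt2, opt3 = '1', '1', '2'
if opt == '1' and opt2 == '1' and opt3 == '1':
    print('all three conditions are true')
else:
    print('at least one condition is false')   # printed here, because opt3 is '2'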

Introduce the Import Module Mechanism

See for this the two files:

stringDemo2b.py
stringDemos.py

stringDemo2b.py is the same as stringDemo2.py discussed above, but all the supporting functions have been removed from the main file and stored in an extra file called ‘stringDemos.py’, which serves as a module file for the main file stringDemo2b.py. For this to work there must be a special

import stringDemos as sd

command, and at each occurrence of a call to a function from the imported module inside the main module stringDemo2b.py one has to add the prefix ‘sd.’, indicating that these functions are now located in a special place.
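
A minimal sketch of this layout (assumed structure; the real files may differ in detail):

# ---- File stringDemos.py : the module with the supporting functions ----
# def worder(w):
#     ...
# def sqorder(ws):
#     ...

# ---- File stringDemo2b.py : the main program ----
import stringDemos as sd         # import the module under the abbreviation 'sd'

w1 = input('Input a single word\n')
wnew = sd.worder(w1)             # every call now carries the prefix 'sd.'
print(wnew)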

This import call works only if the path of the imported module ‘stringDemos.py’ is visible to the python module import mechanism. In this case the path to the module stringDemos.py is C:\Users\gerd_2\code. If one wants to know which path names are currently known to the python system one can use a system call:

>>> import sys
>>> sys.path
>>> …

If the wanted path is not yet part of these given paths one can append the new path like this:

>>> sys.path.append('C:\\Users\\gerd_2\\code')

If all this has been done correctly one can work with the program as before. The main advantages of splitting the main program from the supporting functions are (i) greater transparency of the main code and (ii) that the supporting functions can now easily be reused by other programs if needed.

 

 

STARTING WITH PYTHON3 – The very beginning – part 4

Journal: uffmm.org,
ISSN 2567-6458, July 15, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email:
gerd@doeben-henisch.de

Change: July 16, 2019 (Some re-arrangement of the content :-))

CONTEXT

This is the next step in the python3 programming project. The overall context is still the python Co-Learning project.

SUBJECT

After a first look at the environment for python programming we started with the structure of the python programming language, and in this section we will deal with the object type string.

Remark: the following information about strings you can get directly from the python manuals, which you can find associated with the entry for python 3.7.3 if you press the Windows button, look at the list of apps (= programs), and identify the entry for python 3.7.3. If you open the python entry by clicking you see the sub-entry python 3.7.3 Manuals. If you click on this sub-entry the python documentation will open. In this documentation you can find nearly everything you will need. For beginners there is even a nice tutorial.

TOPIC: VALUES (OBJECTS) AS STRINGS

PROBLEM(s)

(1) When I see a single word (a string of symbols) I do not know which type this is in python. (2) If I have a statement with many words I would like to get from this a partition into all the single words for further processing.

VISION OF A SOLUTION

There is a simple software actor which can receive as input either a single word or multiple words and which responds by giving either the type of the received word or the list of the received words.

ACTOR STORY (AS)

We assume a human user as executing actor (eA) and a piece of running software as an assisting actor (aA). For both of these we assume the following sequence of states:

  1. The user will start the program by calling python and the name of the program.
  2. The program offers the user two options: single word or multiple words.
  3. The user has to select one of these options.
  4. After the selection the user can enter accordingly either one  or multiple words.
  5. The program will respond either with the recognized type in python or with a list of words.
  6. Finally the program asks the user whether he/she wants to continue or stop.
  7. Depending on the answer of the user the program will continue or stop.

IMPLEMENTATION

Here you can download the sourcecode: stringDemo1

# File stringDemo1.py
# Author: G.Doeben-Henisch
# First date: July 15, 2019

##################
# Function definition sword()

def sword(w1):
    w=str(w1)
    if w.islower():
        print('Is lower\n')
    elif w.isalpha():
        print('Is alpha\n')
    elif w.isdecimal():
        print('Is decimal\n')
    elif w.isascii():
        print('Is ascii\n')
    else:
        print('Is not lower, alpha, decimal, ascii\n')

##########################
# Main Programm

###############
# Start main loop

loop='Y'
while loop=='Y':

    ###################
    # Ask for Options

    opt=input('Single word =1 or multiple words =2\n')

    if opt=='1':
        w1=input('Input a single word\n')
        sword(w1)        # Call of the new function defined above

    elif opt=='2':
        w1=input('Input multiple words\n')
        w2=w1.split()    # Call of a built-in method of class str
        print(w2)

    loop=input('To stop enter N\n')   # Check whether the loop shall be repeated

DEMO

Here it is assumed that the code of the python program is stored in the folder ‘code’ in my home directory.

I start the windows power shell (PS) by clicking on its icon. Then I enter the command ‘cd code’ to enter the folder code. Then I call the python interpreter together with the demo program ‘stringDemo1.py’:

PS C:\Users\gerd_2\code> python stringDemo1.py
Single word =1 or multiple words =2

Then I select first option ‘Single word’ with entering 1:

1
Input a single word
Abrakadabra
Is alpha

To stop enter N

After entering 1 the program asks me to enter a single word.

I am entering the fantasy word ‘Abrakadabra’.

Then the program responds with the classification ‘Is alpha’, which is correct. If I want to stop I have to enter ‘N’, otherwise it continues.

I want to try another word, therefore I enter ‘Y’:

Y
Single word =1 or multiple words =2

I select ‘1’ again and the menu appears anew:

1
Input a single word
29282726
Is decimal

To stop enter N

I entered a sequence of digits which has been classified as ‘decimal’.

I continue with ‘Y’ and then enter ‘2’:

Y
Single word =1 or multiple words =2
2
Input multiple words
Hans kommt meistens zu spät
[‘Hans’, ‘kommt’, ‘meistens’, ‘zu’, ‘spät’]
To stop enter N

I have entered a German sentence with 5 words. The response of the system is to identify every single word and generate a list of the individual words.

Thus, so far, the test works fine.

COMMENTS TO THE SOURCE CODE

Before the main program a new function ‘sword()’ has been defined:

def sword(w1):

The python keyword ‘def‘ indicates that here the definition of a function takes place, ‘sword‘ is the name of this new function, and ‘w1‘ is the input argument of this function. ‘w1’ as such is the name of a variable pointing to some memory place, and the value of this variable at this place will depend on the context.

w=str(w1)

The input variable w1 is passed to the operator str, and str translates the input value into a python object of type ‘string’. Thus the further operations can assume that the object is a string, and therefore one can apply to it all the operations which can be applied to strings.

if w.islower():

One of these string-specific operations is islower(). Attached to the string object ‘w’ by the dot-operator ‘.’, the operation islower() checks whether the string object ‘w’ contains lower case symbols. If yes, then the following print() operation will send this message to the output; otherwise the program continues with the next ‘elif‘ statement.

The ‘if‘ keyword (and after the if the ‘elif‘ keyword) states a condition (here: whether ‘w’ is of type ‘lower case symbols’). The statement closes with the ‘:’ sign. This condition can be ‘true’ or not. If it is true then the part after the ‘:’ sign will be executed (the print() action); if false then the next condition ‘elif … :’ will be checked.

If no condition would be true then the ‘else: …’ statement would be executed.

The main program is organized as a loop which can iterate as long as the user does not stop it. This entails that the user can enter as many words or multi-words as he/ she wants.

loop=’Y’
while loop==’Y’:

In the first line the variable ‘loop’ receives the string ‘Y’ (short for ‘yes’) as its value. In the next line the loop starts with the python key-word ‘while’ forming a condition statement ‘while … :’. This is similar to the condition statements above with ‘if … :’ and ‘elif … :’.

The condition depends on the expression loop == ‘Y’, which means that as long as the variable loop is equal (==) to the value ‘Y’ the loop condition is ‘true’ and the part after the ‘:’ sign will be executed. Thus if one wants to break this loop one has to change the value of the variable ‘loop’ before the while-statement ‘while … :’ is checked again. This check is done in the last line of the while-body with the input command:

loop=input(‘To stop enter N\n’)

Before the while-condition is checked again, this input() operator asks the user to enter an ‘N’ if he/she wants to stop. If the user enters an ‘N’ in the input line, the result of this input is stored in the variable called ‘loop’, and therefore the variable will have the value ‘N’, which is different from ‘Y’. But what happens if the user enters something different from ‘N’ and ‘Y’, given that ‘Y’ is expected for repetition?

Because the user does not know that he/she has to enter ‘Y’ to continue, the program will very probably stop even if the user does not want to stop. To avoid this unwanted case one should change the code of the while-condition as follows:

while loop!='N':

This states that the loop condition stays true as long as the value of the loop variable is different (!=) from the value ‘N’, which is explicitly asked from the user at the end of the loop.
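
A minimal sketch of the adjusted main loop (the body is only indicated by a comment):

loop = 'Y'
while loop != 'N':                     # repeat until the user explicitly enters 'N'
    # ... the body of the main loop goes here ...
    loop = input('To stop enter N\n')  # any other input continues the loop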

The main part of the while-loop distinguishes two cases: single word or multiple words. This is realized by a new input() operation:

opt=input(‘Single word =1 or multiple words =2\n’)

The user can enter a ‘1’ or a ‘2’, which will be stored in the variable ‘opt’. Then a construction with an if and an elif tests which of these two cases happened. Depending on the option 1 or 2 the program asks the user again with an input() operation for the specific input (one word or multiple words).

sword(w1)

In the case of the one-word input the variable ‘w1’ contains as its value the string input, which will be passed as input argument to the new function ‘sword()’ (see the explanation above). In the case of input 2 the

w2=w1.split()

‘split()’ operation will be applied to the object ‘w1’ by the dot operator ‘.’. This operation takes every word separated by a ‘blank’ and generates a list ‘[ … ]’ with the individual words as elements.

 

 

PHILOSOPHY LAB

eJournal: uffmm.org

ISSN 2567-6458, July 13,  2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Changes: July 20, 2019 (Rewriting the introduction)

CONTEXT

This Philosophy Lab section of the uffmm science blog is the latest extension of the uffmm blog, added in July 2019. It has been provoked by the meta reflections about the AAI engineering approach.

SCOPE OF SECTION

This section deals with  the following topics:

  1. How can we talk about science including the scientist (and engineer!) as the main actors? In a certain sense one can say that science is mainly a specific way how to communicate and to verify the communication content. This presupposes that there is something called knowledge located in the heads of the actors.
  2. The presupposed knowledge usually targets different scopes encoded in different languages. The language enables or delimits meaning, and meaning objects can either enable or delimit a certain language. As part of society and as exemplars of the species homo sapiens, scientists participate in the general behavioral tendency to assimilate majority behavior and majority meanings. This can reduce the realm of knowledge in many ways. Biological life in general is the opposite of physical entropy, generating autopoietically, during the course of time, more and more complexity. This is due to a built-in creativity and the freedom to select. Thus life is always oscillating between conformity and experiment.
  3. The survival of modern societies depends highly on the ability to communicate with maximal sharing of experience by exploring possible state spaces with their pros and cons fast and extensively. Knowledge must be visible to all around the clock, computable, modular, constructive, in the format of interactive games with transparent rules. Machines should be re-framed as primarily helping humans, not the other way around.
  4. To enable such new open and dynamic knowledge spaces one has to redefine computing machines, extending the Turing machine (TM) concept to a world machine (WM) concept which offers several new services for social groups, whole cities or countries. In the future there will be no distinction between man and machine, because there will be a complete symbiotic unification: the machines will have become an integral part of a personality, the extension of the body in some new way, probably far beyond the cyborg paradigm.
  5. The basic creativity and freedom of biological life has been further developed in a fundamental all embracing spirituality of life in the universe which is targeting a re-creation of the whole universe by using the universe for the universe.

 

AAI THEORY V2 –A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, 22.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 23.February 2019 (continued the text)

Last change: 24.February 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact that some biological systems called ‘homo sapiens’, which emerged only some 300,000 years ago in Africa, show a special property usually called ‘consciousness’ combined with the ability to ‘communicate by symbolic languages’.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those ‘things’ about which we are ‘conscious’ are not ‘directly’ the objects and events of the surrounding real world but the ‘constructions of the brain’ based on actual external and internal sensor inputs as well as already collected ‘knowledge’. To qualify the ‘conscious things’ as ‘different’ from the assumed ‘real things’ ‘outside there’ it is common to speak of these brain-generated virtual things either as ‘qualia’ or — more often — as ‘phenomena’ which are  different to the assumed possible real things somewhere ‘out there’.

PHILOSOPHY AS FIRST PERSON VIEW

‘Philosophy’ has many facets.  One enters the scene if we are taking the insight into the general virtual character of our primary knowledge to be the primary and irreducible perspective of knowledge.  Every other more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be ‘generated’ by the consciousness ‘itself’ — mostly called ‘by will’ — and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense ‘given’ and ‘independent’, which appear and disappear ‘on their own’. It is common to call these independent phenomena ‘empirical phenomena’; they represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these ‘empirical phenomena’ are still ‘phenomena’, virtual entities generated by the brain inside the brain, not directly controllable ‘by will’.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are ‘clusters’/ ‘patterns’ of phenomena which ‘represent’ possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing  ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perception of two different observers possibly differs somehow with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers, enhances the quality, improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a previously introduced empirical standard object.
  3. Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon which can potentially be observed by other persons in my body as causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as to its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a ‘hard’ task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain ‘knows’ about the relationship between some sounds and ‘something else’ because the brain is able to ‘generate relations’ between brain-structures for sounds and brain-structures for something else. These relations are some real connections in the brain. Therefore sounds can be related to ‘something  else’ or certain objects, and events, objects etc.  can become related to certain sounds. But these ‘meaning relations’ can only ‘bridge the gap’ to another brain if both brains are using the same ‘mapping’, the same ‘encoding’. This is only possible if the two brains with their bodies share a real world situation RW_S where the perceptions of the both brains are associated with the same parts of the real world between both bodies. If this is the case the perceptions P(RW_S) can become somehow ‘synchronized’ by the shared part of the real world which in turn is transformed in the brain structures P(RW_S) —> B_S which represent in the brain the stimulating aspects of the real world.  These brain structures B_S can then be associated with some sound structures B_A written as a relation  MEANING(B_S, B_A). Such a relation  realizes an encoding which can be used for communication. Communication is using sound sequences exchanged between brains via the body and the air of an environment as ‘expressions’ which can be recognized as part of a learned encoding which enables the receiving brain to identify a possible meaning candidate.

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a ‘visual code’ which represents by some kinds of drawing the visual shapes of objects in the space, some attributes of  shapes, which are common for all people who can ‘see’. Thus a picture and then a sequence of pictures like a comic or a story board can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of  a spoken language. The sequence of the pictures represents additionally some ‘timely order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w, using signs for parts of words or even whole words. The possible meanings of these written expressions were no longer directly ‘visible’. The meaning was now only available to those people who had learned how these written expressions are associated with the intended meanings encoded in the heads of all language participants. Thus hearing or reading a language expression tells the reader either ‘nothing’, some ‘possible meanings’, or a ‘definite meaning’.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions then one has to ‘know’ with which ‘meaning in the brain’ the expressions have to be associated. And what is very special about the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very ‘concrete’ visual objects, while the written expressions are ‘general’ expressions allowing many different concrete interpretations. Thus the expression ‘person’ can be associated with many thousands of different concrete objects; the same holds for the expressions ‘road’, ‘moving’, ‘before’ and so on. Thus the written expressions are like ‘manufacturing instructions’ to search for possible meanings and to configure these meanings into a ‘reasonable’ complex whole. And because written expressions are in general rather ‘abstract’/ ‘general’, allowing numerous possible concrete realizations, they are very ‘economic’: they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe later in the history of written languages the steady development of a specialized version called ‘formal languages’ L_f with many different domains of application. Here I am  focusing   on the formal written languages which are used in mathematics as well as some pictorial elements to ‘visualize’  the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In  the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes.  Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes then N x N represents the set of all pairs of these nodes.

In a more specialized version the edges are ‘directed’ (like a ‘one way road’) and also can be ‘looped back’ to a node   occurring ‘earlier’ in the graph. If such back-looping arrows occur a graph is called a ‘cyclic graph’.
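
As a small side illustration (not part of the text above), such a directed graph can be written down directly as a set of node pairs, and one can check mechanically whether it contains a back-looping arrow:

# A directed graph as a set of nodes N and a set of edges (pairs from N x N).
nodes = {'n1', 'n2', 'n3'}
edges = {('n1', 'n2'), ('n2', 'n3'), ('n3', 'n1')}   # the last arrow loops back

def is_cyclic(nodes, edges):
    # Depth-first search for a back edge; True means the graph is cyclic.
    succ = {n: [b for (a, b) in edges if a == n] for n in nodes}
    visited, on_path = set(), set()

    def visit(n):
        visited.add(n)
        on_path.add(n)
        for m in succ[n]:
            if m in on_path or (m not in visited and visit(m)):
                return True
        on_path.discard(n)
        return False

    return any(visit(n) for n in nodes if n not in visited)

print(is_cyclic(nodes, edges))   # True: n1 -> n2 -> n3 -> n1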

Fig. 5: Directed cyclic graph extended to represent ‘states of affairs’

If one wants to use such a graph to describe some ‘states of affairs’ with their possible ‘changes’ one can ‘interpret’ a ‘node’ as  a state of affairs and an arrow as a change which turns one state of affairs S in a new one S’ which is minimally different to the old one.

As a state of affairs I understand here a ‘situation’ embedded in some ‘context’ presupposing some common ‘space’. The possible ‘changes’ represented by arrows presuppose some dimension of ‘time’. Thus if a node n’ follows a node n as indicated by an arrow, then the state of affairs represented by the node n’ is to be interpreted as following the state of affairs represented by the node n ‘later’ with regard to the presupposed time T, or n < n’ with ‘<‘ as a symbol for a temporal ordering relation.

Fig. 6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of a space. If one assumes as an example a 2-dimensional space configured as a grid –as shown in figure 6 — with two tokens at certain positions one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(1,2), FREE(C1)

‘T1’, ‘T2’, as well as ‘C1’ are names of objects, ‘TOKEN’, ‘BLACK’ etc. are names of properties, and ‘NEIGHBOR’ is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe possible objects ‘external to the situation’ as important factors which can cause some changes then one can describe these objects as a set of facts  in a separated ‘context’. In this example this could be two players which can move the black and white tokens and thereby causing a change of the situation. What is the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region one usually would not count the planets and the atmosphere as part of this region but one knows that e.g. the sun can severely influence the situation   in combination with the atmosphere.

Fig. 7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs consisting only of a situation without a context. Such a state of affairs is a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by unifying S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still ‘valid’. In a more general manner one can introduce a change-expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If for instance  by some reason the white token should be removed from the situation one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)})>

Another notation for this is S2 = S1 – {TOKEN(T2), WHITE(T2), POSITION(2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1),TOKEN(T3), BLACK(T3), POSITION(2,2)}

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If — as in the example above — the object token T2 disappears, then the relation ‘NEIGHBOR(T1,T2)’ no longer holds. This points to a hierarchy of dependencies with the ‘basic facts’ at the ‘root’ of a situation and all the other facts ‘above’ the basic facts, or ‘higher’, depending on the basic facts. Thus ‘higher order’ facts should be added only for the actual state and have to be ‘re-computed’ anew for every follow-up state.
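
To make this formalism a bit more tangible, here is a small Python sketch (purely illustrative, not part of the AAI text itself): states are represented as sets of fact tuples, and the add/subtract change operations become set union and set difference. The exact encoding of the facts is an assumption of this sketch.

# A state as a set of facts; each fact is a tuple such as ('TOKEN', 'T1').
S1 = {
    ('TOKEN', 'T1'), ('BLACK', 'T1'), ('POSITION', 'T1', 1, 1),
    ('TOKEN', 'T2'), ('WHITE', 'T2'), ('POSITION', 'T2', 2, 1),
    ('NEIGHBOR', 'T1', 'T2'),
    ('CELL', 'C1'), ('POSITION', 'C1', 1, 2), ('FREE', 'C1'),
}

def add(state, facts):
    return state | facts        # the new state is the union of state and facts

def subtract(state, facts):
    return state - facts        # the new state is the state without the removed facts

# Remove the white token and insert a new black token at position (2,2).
S2 = subtract(S1, {('TOKEN', 'T2'), ('WHITE', 'T2'), ('POSITION', 'T2', 2, 1)})
S2 = add(S2, {('TOKEN', 'T3'), ('BLACK', 'T3'), ('POSITION', 'T3', 2, 2)})

# Higher-order facts such as NEIGHBOR(...) depend on the basic facts and would
# have to be re-computed for the new state; here the stale relation is dropped.
S2 = subtract(S2, {('NEIGHBOR', 'T1', 'T2')})
print(sorted(S2))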

If one would specify a context for state S1 saying that there are two players and one allows for each player actions like ‘move’, ‘insert’ or ‘delete’ then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1,insert(TOKEN(T3,2,2)),add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would read as follows: given state S1 the player PB1 inserts a  black token at position (2,2); this yields a new state S2.

With or without a specified context, but with regard to a set of possible change statements, it can be the case — and usually is — that there is more than one option for what can be changed. Some of the main types of changes are the following ones:

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, over different collected time periods as an ‘oscillating system’ with different behavior, or even as a random system with changing probabilities. This makes the forecast of the behavior of adaptive and/or learning systems rather difficult.

Another aspect results from the fact that there can be states with either one actor which can cause more than one action in parallel, or with multiple actors which can act simultaneously. In both cases the resulting total change may have to be ‘filtered’ through some additional rules telling what is ‘possible’ in a state and what not. Thus if in the example of figure 6 both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action or — like in a computer game — simultaneous actions are allowed but the ‘geometry of a 2-dimensional space’ would not allow two different tokens at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified then a change from some state S_i to a state S_j only marks the follow-up state S_j as later. There is no specific ‘metric’ of time. If instead a certain ‘clock’ is specified then all changes have to be aligned with this ‘overall clock’. Then one can specify at what ‘point of time t’ the change will begin and at what ‘point of time t*’ the change will end. If more than one change is specified then these different changes can have different timings.

THIRD PERSON VIEW

Up until now the point of view for describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if he or she is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the ‘surface’ of objects, including all kinds of actors. Thus if the driver of a car steers the car in a certain direction, then the ‘observing person’ can see what happens, but cannot ‘look into’ the driver to see ‘why’ he is steering in this way or ‘what he is planning next’.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to ‘understand’ a bit more of ‘what is going on in a system’. Thus a biologist can be interested in understanding what mechanisms ‘inside a plant’ are responsible for the growth of the plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and humans. For instance it is an interesting question what kinds of ‘processes’ are available in an animal to ‘navigate’ in the environment across distances. Even if the biologist can look ‘into the body’, even ‘into the brain’, the cells as such do not tell a sufficient story. One has to understand the ‘functions’ which are enabled by the billions of cells; these functions are complex relations associated with certain ‘structures’ and certain ‘signals’. For this it is necessary to construct an explicit formal (mathematical) model/ theory representing all the necessary signals and relations which can be used to ‘explain’ the observable behavior and which ‘explains’ how the billions of cells enable such a behavior.

In a simpler, ‘relaxed’ kind of modeling one would not take into account the properties and behavior of the ‘real cells’ but would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach to set up models of possible ‘internal’ (as such hidden) processes of an actor can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called ‘actor story’, independent whether they are done in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically ‘sound’ way, the actor models are either ‘purely theoretical’ but ‘behaviorally sound’, or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while ‘living’ in the actor story. These changes in behavior cannot be completely ‘foreseen’!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations a homo sapiens as a biological system has besides many properties at least a consciousness and the ability to talk and by this to communicate with symbolic languages.

Looking to basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some timely ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the ‘root level’, given as occurrences in a concrete situation. Other concepts of ‘higher levels’ refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meaning has to be learned by each actor separately, but in cooperation with all the other actors, with which the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In the consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To ‘learn’ dedicated matters and to be ‘critical’ about the quality of what one is learning requires some discipline, some learning methods, and a ‘learning-friendly’ environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today are not yet sufficiently cleared up.


AAI THEORY V2 – DEFINING THE CONTEXT

eJournal: uffmm.org,
ISSN 2567-6458, 24 January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the second chapter, where you have to define the context of the problem which should be analyzed.

DEFINING THE CONTEXT OF PROBLEM P

  1. A defined problem P identifies at least one property associated with a configuration which has a lower value x than a value y inferred from an accepted standard E.
  2. The property P is always part of some environment ENV which interacts with the problem P.
  3. To approach an improved configuration S measured by  some standard E starting with a  problem P one  needs a process characterized by a set of necessary states Q which are connected by necessary changes X.
  4. Such a process can be described by an actor story AS.
  5. All properties which belong to the whole actor story and therefore have to be satisfied by every state q of the actor story are called non-functional process requirements (NFPRs). If required properties are associated with only one state, but with that state as a whole, then these requirements are called non-functional state requirements (NFSRs).
  6. An actor story can include many different sequences, where every sequence is called a path PTH. A finite set of paths can represent a task T which has to be fulfilled. Within the environment of the defined problem P it must be possible to identify at least one task T to be realized from some start state to some goal state. The realization of a task T is assumed to be ‘driven’ by input-output systems which are called actors A.
  7. Additionally it must be possible to identify at least one executing actor A_exec doing a task and at least one assisting actor A_ass helping the executing actor to fulfill the task.
  8. A state q represents all needed actors as part of the associated environment ENV. Therefore a  state q can be analyzed as a network of elements interacting with each other. But this is only one possible structure for an analysis besides others.
  9. For the   analysis of a possible solution one can distinguish at least two overall strategies:
    1. Top-down: There exists a group of experts EXPs which will analyze a possible solution, will test these, and then will propose these as a solution for others.
    2. Bottom-up: There exists a group of experts EXPs too but additionally there exists a group of customers CTMs which will be guided by the experts to use their own experience to find a possible solution.

EXAMPLE

The mayor of a city has identified as a problem the relationship between the actual population number POP, the amount of actually available living space LSP0, and the amount of recommended living space LSPr according to some standard E. The population of his city steadily interacts with populations in the environment: citizens are moving into the environment MIGR- and citizens from the environment are arriving MIGR+. The population, the city as well as the environment can be characterized by a set of parameters <P1, …, Pn> called a configuration, which represents a certain state q at a certain point of time t. Converting the actual configuration, called the start state q0, into a new configuration S, called the goal state q+, with better values requires the application of a defined set of changes Xs which transform the start state q0 stepwise into a sequence of states qi which finally end up in the desired goal state q+. A description of all the states necessary for the conversion of the start state q0 into the goal state q+ is called here an actor story AS.

Because a democratically elected mayor of the city wants to be ‘liked’ by his citizens, he will require that this conversion process ends up in a goal state which is ‘not harmful’ for his citizens, which should support a ‘secure’ and ‘safe’ environment, ‘good transportation’ and things like that. This illustrates non-functional state requirements (NFSRs). Because the mayor also does not want too much trouble during the conversion process, he will also require some limits for the whole conversion process, that is, for the whole actor story. This illustrates non-functional process requirements (NFPRs).

To realize the intended conversion process the mayor needs several executing actors which do the job and several other assisting actors helping the executing actors. To be able to use the available time and resources ‘effectively’ the executing actors need defined tasks which have to be realized to come from one state to the next. Often more than one sequence of states is possible, either alternatively or in parallel. A certain state at a certain point of time t can be viewed as a network where all participating actors are connected with each other in many ways, interacting in several ways and thereby influencing each other. This realizes different kinds of communication with different kinds of content, allows the exchange of material, and can imply changes of the environment.

Until today the mayors of cities have used as their preferred strategy for realizing conversion processes small selected teams of experts doing their job in a top-down manner, leaving the citizens more or less untouched, at least without a serious participation in the whole process. From now on it is possible and desirable to twist the strategy from top-down to bottom-up. This implies that the selected experts enable a broad communication with potentially all citizens who are touched by a conversion, including the knowledge, experience, skills, visions etc. of these citizens, by applying new methods possible in the new digital age.

 

 

ADVANCED AAI-THEORY

eJournal: uffmm.org,
ISSN 2567-6458, 21 January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Here you can find a new version of this post.

CONTEXT

The last official update of the AAI theory dates back to Oct-2, 2018. Since then many new ideas have emerged and have been worked out as further extensions and improvements. Here I try to give an overview of all currently known aspects of the expanded AAI theory as a possible guide for the further elaboration of the main text.

CLARIFYING THE PROBLEM

  1. Generally it is assumed that the AAI theory is embedded in a general systems engineering approach starting with the clarification of a problem.
  2. Two cases will be distinguished:
    1. A stakeholder is associated with a certain domain of affairs with some prominent aspect/parameter P, and the stakeholder wants to clarify whether P poses a 'problem' in this domain. This presupposes some explicit 'expectations' E about how it should be and some 'findings' x pointing to the fact that P is 'sufficiently different' from some expected value y > x. If the stakeholder judges that this difference is 'important', then P matching x will be classified as a problem, which will be documented in a 'problem document D_p'. One can interpret this analysis as a 'measurement M', written as M(P,E) = x with x < y (a minimal sketch of this measurement idea follows after this list).
    2. Given a problem document D_p a stakeholder invites some experts to find a 'solution' which transfers the old 'problem P' into a 'configuration S' which at least should 'minimize the problem P'. Thus there must exist some 'measurements' of the given problem P with regard to certain 'expectations E' functioning as a 'norm', as M(P,E) = x, and some measurements of the new configuration S with regard to the same expectations E, as M(S,E) = y, and a metric which allows the judgment y > x.
  3. From this it follows that already at the beginning of the analysis of a possible solution one has to refer to some measurement process M; otherwise there exists no problem P.
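A minimal sketch of this measurement idea in Python, assuming numeric measurements; the parameter name and all numbers are hypothetical illustrations.

def M(configuration, aspect):
    # measurement M applied to a configuration with regard to one aspect/parameter P
    return configuration[aspect]

norm_E = 30.0                                  # expectation E functioning as a norm
P = {"living_space_per_citizen": 25.0}         # actual domain of affairs with aspect P
S = {"living_space_per_citizen": 28.0}         # proposed new configuration S

x = M(P, "living_space_per_citizen")
y = M(S, "living_space_per_citizen")

problem_exists = x < norm_E      # if judged 'important', this goes into the problem document D_p
solution_improves = y > x        # S at least 'minimizes the problem P'

print(problem_exists, solution_improves)       # True True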

CHECK OF FRAMING CONDITIONS

  1. The definition of a problem P presupposes a domain of affairs which has to be characterized in at least two respects:
    1. A minimal description of an environment ENV of the problem P and
    2. a list of so-called non-functional requirements (NFRs).
  2. Within the environment it must be possible to identify at least one task T to be realized from some start state to some end state.
  3. Additionally it must be possible to identify at least one executing actor A_exec doing this task and at least one assisting actor A_ass helping the executing actor to fulfill the task.
  4. For the following analysis of a possible solution one can distinguish two strategies:
    1. Top-down: There exists a group of experts EXPs which will analyze a possible solution, test it, and then propose it as a solution for others.
    2. Bottom-up: There exists a group of experts EXPs too, but additionally there exists a group of customers CTMs which will be guided by the experts to use their own experience to find a possible solution.

ACTOR STORY (AS)

  1. The goal of an actor story (AS) is a full specification of all identified necessary tasks T which lead from a start state q* to a goal state q+, including all possible and necessary changes between the different states.
  2. A state is here considered as a finite set of facts (F) which are structured as expressions from some language L distinguishing names of objects (like 'd1', 'u1', …) as well as properties of objects (like 'being open', 'being green', …) or relations between objects (like 'the user stands before the door'). There can also be a 'negation' like 'the door is not open'. Thus a collection of facts like 'There is a door D1' and 'The door D1 is open' can represent a state.
  3. Changes from one state q to a successor state q' are described by stating which object's action deletes previous facts or creates new facts.
  4. In this approach at least three different modes of an actor story will be distinguished:
    1. A pictorial mode generating a Pictorial Actor Story (PAS). In a pictorial mode the drawings represent the main objects with their properties and relations in an explicit visual way (like a comic strip).
    2. A textual mode generating a Textual Actor Story (TAS): In a textual mode a text in some everyday language (e.g. English) describes the states and changes in plain English. Because in the case of a written text the meaning of the symbols is hidden in the heads of the writers it can be of help to parallelize the written text with the pictorial mode.
    3. A mathematical mode generating a Mathematical Actor Story (MAS): In the mathematical mode the pictorial and the textual modes are translated into sets of formal expressions forming a graph whose nodes are sets of facts and whose edges are labeled with change-expressions (a small sketch of such a graph follows after this list).
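A minimal sketch of such a mathematical actor story in Python, assuming that states are represented as sets of fact strings and changes as delete/create sets; the door facts are only an illustrative example, the data layout is an assumption.

# state q0 as a finite set of facts
q0 = frozenset({"There is a door D1", "The door D1 is not open", "The user U1 stands before D1"})

# a change expression: which facts an actor's action deletes and which it creates
change_open_door = {
    "actor": "U1",
    "delete": {"The door D1 is not open"},
    "create": {"The door D1 is open"},
}

def apply_change(state, change):
    # successor state q' = (q minus deleted facts) plus created facts
    return frozenset((state - change["delete"]) | change["create"])

q1 = apply_change(q0, change_open_door)

# the actor story as a graph: nodes are states (sets of facts), edges are labeled with changes
MAS = {"nodes": [q0, q1], "edges": [(q0, change_open_door, q1)]}
print(sorted(q1))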

TASK INDUCED ACTOR-REQUIREMENTS (TAR)

If an actor story AS is completed, then one can infer from this story all the requirements which are directed at the executing as well as the assistive actors of the story. These requirements target the needed input as well as output behavior of the actors from a 3rd-person point of view (e.g. what kinds of perception are required, what kinds of motor reactions, etc.).

ACTOR INDUCED ACTOR-REQUIREMENTS (AAR)

Depending on the kinds of actors planned for the real work (biological systems, animals or humans; machines, different kinds of robots), one has to analyze the required internal structures of the actors needed to enable the required perceptions and responses. This has to be done from a 1st-person point of view.

ACTOR MODELS (AMs)

Based on the AARs one has to construct explicit actor models which are fulfilling the requirements.

USABILITY TESTING (UTST)

Using the actor story as a 'norm' for the measurement one has to organize a 'usability test' in the way that a real executing test actor having the required profile has to use a real assisting actor in the context of the specified actor story. Placed in the start state of the actor story, the executing test actor has to show that and how he will reach the defined goal state of the actor story. For this he has to use a real assistive actor, which usually is an experimental device (a mock-up) that allows the test of the story.

Because an executive actor is usually a 'learning actor' one has to repeat the usability test n times to see whether the learning curve approaches a minimum. In addition to such objective tests one should also organize an interview to get some judgments about the subjective states of the test persons.

SIMULATION

With an increasing complexity of an actor story AS it becomes important to build a simulator (SIM) which can take as input the start state of the actor story together with all possible changes. Then the simulator can compute, beginning with the start state, all possible successor states. In an interactive mode participating actors will explicitly be asked to interact with the simulator.
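A minimal simulator sketch in Python, reusing the fact-set representation from the sketch above: it takes the start state and all possible changes and computes all reachable successor states breadth-first; the concrete change rules are hypothetical.

from collections import deque

def applicable(state, change):
    # a change can fire only if all facts it wants to delete are present in the state
    return change["delete"] <= state

def simulate(start, changes):
    # breadth-first computation of all states reachable from the start state
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for change in changes:
            if applicable(state, change):
                successor = frozenset((state - change["delete"]) | change["create"])
                if successor not in seen:
                    seen.add(successor)
                    frontier.append(successor)
    return seen

start = frozenset({"The door D1 is not open"})
changes = [
    {"delete": {"The door D1 is not open"}, "create": {"The door D1 is open"}},
    {"delete": {"The door D1 is open"}, "create": {"The door D1 is not open"}},
]
print(len(simulate(start, changes)))   # 2 reachable states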

Having a simulator one can use it as part of a usability test to mimic the behavior of an assistive actor. This mode can also be used for training new executive actors.

A TOP-DOWN ACTOR STORY

The elaboration of an actor story will usually be realized in a top-down style: some AAI experts will develop the actor story based on their experience and will only ask for some test persons once they have elaborated everything far enough to define some tests.

A BOTTOM-UP ACTOR STORY

In a bottom-up style the AAI experts collaborate from the beginning with a group of common users from the application domain. To do this they will (i) extract the knowledge which is distributed among the different users, then (ii) start some modeling from these different facts to (iii) enable some basic simulations. This simple simulation (iv) will be enhanced to an interactive simulation which allows serious gaming, either (iv.a) to test the model or (iv.b) to enable the users to learn the space of possible states. The test case will (v) generate some data which can be used to evaluate the model with regard to pre-defined goals. Depending on these findings (vi) one can try to improve the model further.

THE COGNITIVE SPACE

To be able to construct executive as well as assistive actors which are close to the way human persons communicate, one has to set up actor models which are as close as possible to the human style of cognition. This requires the analysis of phenomenal experience as well as of psychological behavior, as well as the analysis of the needed neuro-physiological structures.

STATE DYNAMICS

To model in an actor story the possible changes from one given state to another one (or to many successor states) one eventually needs, besides explicit deterministic changes, different kinds of random rules together with adaptive ones or decision-based behavior depending on a whole network of changing parameters.
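A minimal sketch of such mixed state dynamics in Python, assuming deterministic rules always fire when applicable while random rules fire with some probability p; the rule contents are hypothetical illustrations.

import random

def step(state, deterministic_rules, random_rules, p=0.5):
    # compute one successor state from deterministic and probabilistic change rules
    new_state = set(state)
    for rule in deterministic_rules:
        if rule["delete"] <= new_state:
            new_state = (new_state - rule["delete"]) | rule["create"]
    for rule in random_rules:
        if rule["delete"] <= new_state and random.random() < p:
            new_state = (new_state - rule["delete"]) | rule["create"]
    return frozenset(new_state)

state = frozenset({"light is off"})
deterministic = [{"delete": set(), "create": {"clock ticks"}}]
randomized = [{"delete": {"light is off"}, "create": {"light is on"}}]
print(step(state, deterministic, randomized))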

LIBRARIES AS ACTORS. WHAT ABOUT THE CITIZENS?

eJournal: uffmm.org, ISSN 2567-6458, 19 January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this blog a new approach to the old topic of ‘Human-Machine Interaction (HMI)’ is developed turning the old Human-Machine dyad into the many-to-many relation of ‘Actor-Actor Interaction (AAI)’. And, moreover, in this new AAI approach the classical ‘top-down’ approach of engineering is expanded with a truly ‘bottom-up’ approach locating the center of development in the distributed knowledge of a population of users assisted by the AAI experts.

PROBLEM

From this perspective it is interesting to see how, on an international level, the citizens of a community/city are not at the center of research; instead the city and its substructures (here public libraries) are called 'actors', while the citizens as such are only anonymous matter driving these structures to serve the international 'buzz word' of a 'smart city' empowered by the 'Internet of Things (IoT)'.

This perspective is presented in a paper by Shannon Mersand et al. (2019) which reviews the main papers available on the role of public libraries in cities. It seems (I could not check the search space myself) that the paper gives a good overview of this topic in 48 cited papers.

The main idea underlined by the authors is that public libraries are already so-called 'anchor institutions' in a community, which either already include or could be extended into "spaces for innovation, collaboration and hands on learning that are open to adults and younger children as well". (p.3312) Or, in another formulation, "that libraries are consciously working to become a third space; a place for learning in multiple domains and that provides resources in the form of both materials and active learning opportunities". (p.3312)

The paper is rich in details, but in the context of the AAI paradigm I am interested only in the general perspective of how the roles of those actors are described which are identified as responsible for the process of problem solving.

The unofficial problem of cities is how to organize the city so that it responds to the needs of its citizens. There are some 'official institutions' which 'officially' have to fulfill this job. In democratic societies these institutions are 'elected'. Ideally these official institutions are the experts who try to solve the problem for the citizens, who are the main stakeholders! To help in this job of organizing the 'best fitting city layout' there usually exists at any point of time a bunch of infrastructures. The modern 'Internet of Things (IoT)' is only one of many possible infrastructures.

To proceed in doing the job of organizing the 'best fitting city layout' there are generally two main strategies: 'top-down', as usual in most cities, or 'bottom-up', in nearly no cities.

In the top-down approach the experts organize the processes of the cities more or less on their own. They do not really include the expertise of their citizens, not their knowledge, not their desires and visions. The infrastructures are provided from a bird's-eye perspective and an abstract systems thinking.

The case of the public libraries matches this top-down paradigm. At the end of their paper the authors classify public libraries not only as some 'infrastructure' but "… recognize the potential of public libraries … and to consider them as a key actor in the governance of the smart community". (p.3312) The term 'actor' is very strong. This turns an institution into an actor with some autonomy in deciding what to do. The users of the library, the citizens, the primary stakeholders of the city, are not seen as actors; they are, here, the material to 'feed' (to use a picture) the actor library, which in turn has to serve the governance of the 'smart community'.

DISCUSSION

Yes, this comment can be understood as a bit 'harsh', because one can read the text of the authors a bit differently, in the sense that the citizens are not only some matter to 'feed' the actor library; one can also see the public library as an 'environment' for the citizens, who find in the libraries many possibilities to learn and empower themselves. In this different reading the citizens are clearly seen as actors too.

This different reading is possible, but within an overall 'top-down' approach the citizens are not really included as actors but only as passive receivers of infrastructure offers; in a top-down approach the main focus is on the infrastructures, and of all the infrastructures the 'smart' structures are most prominent, the Internet of Things.

If one remembers two previous papers by Mila Gascó (2016) and Mila Gascó-Hernandez (2018), then this is a bit astonishing, because in these earlier papers she analyzed that the 'failure' of the smart-technology strategy in Barcelona was due to the fact that the city government (the experts in our framework) did not include the citizens sufficiently as actors!

From the point of view of the AAI paradigm this ‘hiding of the citizens as main actors’ is only due to the inadequate methodology of a top-down approach where a truly bottom-up approach is needed.

In the Oct-2, 2018 version of the AAI theory the bottom-up approach is not yet included. It has been worked out in the context of the new research project about 'City Planning and eGaming', which in turn has been inspired by Mila Gascó-Hernandez!

REFERENCES

  • S. Mersand, M. Gasco-Hernandez, H. Udoh, and J.R. Gil-Garcia, "Public libraries as anchor institutions in smart communities: Current practices and future development", Proceedings of the 52nd Hawaii International Conference on System Sciences, pages 3305-3314, 2019. URL: https://hdl.handle.net/10125/59766.

  • Mila Gascó, "What makes a city smart? Lessons from Barcelona", 2016 49th Hawaii International Conference on System Sciences (HICSS), pages 2983-2989, Jan 2016. DOI: 10.1109/HICSS.2016.373.

  • Mila Gascó-Hernandez, "Building a smart city: Lessons from Barcelona", Commun. ACM, 61(4):50-57, March 2018. ISSN 0001-0782. DOI: 10.1145/3117800. URL: http://doi.acm.org/10.1145/3117800.

ACTOR-ACTOR INTERACTION [AAI] WITHIN A SYSTEMS ENGINEERING PROCESS (SEP). An Actor Centered Approach to Problem Solving

eJournal: uffmm.org, ISSN 2567-6458
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

ATTENTION: You will find the current version HERE.

Draft version 22.June 2018

Update 26.June 2018 (Chapter AS-AM Summary)

Update 4.July 2018 (Chapter 4 Actor Model; improving the terminology of environments with actors, actors as input-output systems, basic and real interface, a first typology of input-output systems…)

Update 17.July 2018 (Preface, Introduction new)

Update 19.July 2018 (Introduction final paragraph!, new chapters!)

Update 20.July 2018 (Disentanglement of chapter ‘Simulation & Verification’ into two independent chapters; corrections in the chapter ‘Introduction’; corrections in chapter ‘AAI Analysis’; extracting ‘Simulation’ from chapter ‘Actor Story’ to new chapter ‘Simulation’; New chapter ‘Simulation’; Rewriting of chapter ‘Looking Forward’)

Update 22.July 2018 (Rewriting the beginning of the chapter ‘Actor Story (AS)’, not completed; converting chapter ‘AS+AM Summary’ to ‘AS and AM Philosophy’, not completed)

Update 23.July 2018 (Attaching a new chapter with a Case Study illustrating an actor story (AS). This case study is still unfinished. It is a case study of  a real project!)

Update 7.August 2018 (Modifying chapter Actor Story, the introduction)

Update 8.August 2018 (Modifying chapter  AS as Text, Comic, Graph; especially section about the textual mode and the pictorial mode; first sketch for a mapping from the textual mode into the pictorial mode)

Update 9.August 2018 (Modification of the section ‘Mathematical Actor Story (MAS) in chapter 4).

Update 11.August 2018 (Improving chapter 3 ‘Actor Story; nearly complete rewriting of chapter 4 ‘AS as text, comic, graph’.)

Update 12.August 2018 (Minor corrections in the chapters 3+4)

Update 13.August 2018 (I am still caught up in chapters 3+4. In chapter 3 the cognitive structure of the actors has been further enhanced; in chapter 4 a complete example of a mathematical actor story could now be attached.)

Update 14.August 2018 (Minor corrections to chapters 4 + 5; change-statements define for each state individual combinatorial spaces (a little bit like a quantum state); whether and how these spaces will be concretized/realized depends completely on the participating actors.)

Update 15.August 2018 (Canceled the appendix with the case study stub and replaced it with an overview of a supporting software tool which is needed for the real usage of this theory. At the moment it is open who will write the software.)

Update 2.October 2018 (Configuring the whole book now with 3 parts: I. Theory, II. Application, III. Software. Gerd has his focus on part I, Zeynep will focus on part II, and 'somebody' will focus on part III (in the worst case we will, nevertheless, have a minimal version :-)). For a first quick overview about everything read the 'Preface' and the 'Introduction'.)

Update 4.November 2018 (Rewriting the Introduction (and some minor corrections in the Preface). The idea of the rewriting was to address all the topics which will be discussed in the book and to point out the logical connections between them. This induces some wrong links in the following chapters, which are not yet updated. Some chapters are still completely missing. But improving the clearness of the focus and the logical inter-dependencies helps a lot in elaborating the missing texts. Another change concerns the wording of the title. Until now it has been difficult to find a title which exactly matches the content. The new proposal shows the focus 'AAI' but lists the keywords of the main topics within AAI analysis because these topics are usually not necessarily associated with AAI.)

ACTOR-ACTOR INTERACTION [AAI]. An Actor Centered Approach to Problem Solving. Combining Engineering and Philosophy

by

GERD DOEBEN-HENISCH in cooperation with  LOUWRENCE ERASMUS, ZEYNEP TUNCER

LATEST  VERSION AS PDF

BACKGROUND INFORMATION 19.Dec.2018: Application domain ‘Communal Planning and e-Gaming’

BACKGROUND INFORMATION 24.Dec.2018: The AAI-paradigm and Quantum Logic

PRE-VIEW: NEW EXPANDED AAI THEORY 23.January 2019: Outline of the new expanded  AAI Paradigm. Before re-writing the main text with these ideas the new advanced AAI theory will first be tested during the summer 2019 within a lecture with student teams as well as in  several workshops outside the Frankfurt University of Applied Sciences with members of different institutions.

AASE – Actor-Actor Systems Engineering. Theory & Applications. Micro-Edition (Vers.9)

eJournal: uffmm.org, ISSN 2567-6458
13.June  2018
Email: info@uffmm.org
Authors: Gerd Doeben-Henisch, Zeynep Tuncer,  Louwrence Erasmus
Email: doeben@fb2.fra-uas.de
Email: gerd@doeben-henisch.de

PDF

CONTENTS

1 History: From HCI to AAI …
2 Different Views …
3 Philosophy of the AAI-Expert …
4 Problem (Document) …
5 Check for Analysis …
6 AAI-Analysis …
6.1 Actor Story (AS)
6.1.1 Textual Actor Story (TAS)
6.1.2 Pictorial Actor Story (PAT)
6.1.3 Mathematical Actor Story (MAS)
6.1.4 Simulated Actor Story (SAS)
6.1.5 Task Induced Actor Requirements (TAR)
6.1.6 Actor Induced Actor Requirements (UAR)
6.1.7 Interface-Requirements and Interface-Design
6.2 Actor
6.2.1 Actor and Actor Story
6.2.2 Actor Model
6.2.3 Actor as Input-Output System
6.2.4 Learning Input-Output Systems
6.2.5 General AM
6.2.6 Sound Functions
6.2.7 Special AM
6.2.8 Hypothetical Model of a User – The GOMS Paradigm
6.2.9 Example: An Electronically Locked Door
6.2.10 A GOMS Model Example
6.2.11 Further Extensions
6.2.12 Design Principles; Interface Design
6.3 Simulation of Actor Models (AMs) within an Actor Story (AS)
6.4 Assistive Actor-Demonstrator
6.5 Approaching an Optimum Result
7 What Comes Next: The Real System
7.1 Logical Design, Implementation, Validation
7.2 Conceptual Gap In Systems Engineering?
8 The AASE-Paradigm …
References

Abstract

This text is based on the paper "AAI – Actor-Actor Interaction. A Philosophy of Science View" from 3 Oct 2017 and on version 11 of the paper "AAI – Actor-Actor Interaction. An Example Template", and it transforms these views into the new paradigm 'Actor-Actor Systems Engineering', understood as a theory as well as a paradigm for an infinite set of applications. In analogy to the slogan 'Object-Oriented Software Engineering (OO SWE)' one can understand the new acronym AASE as a systems engineering approach where the actor-actor interactions are the base concepts for the whole engineering process. Furthermore it is a clear intention to view the topic AASE explicitly from the point of view of a theory (as understood in philosophy of science) as well as from the point of view of possible applications (as understood in systems engineering). Thus the classical term Human-Machine Interaction (HMI), and even the older Human-Computer Interaction (HCI), is now embedded within the new AASE approach. The same holds for the fuzzy discipline of Artificial Intelligence (AI) and the subset of AI called Machine Learning (ML). Although the AASE approach is completely in its beginning one can already see how powerful this new conceptual framework is.

 

 

INTELLIGENT MACHINES – INTRODUCTION

Scientific Workplace For an Integrated Engineering of the Future
eJournal uffmm.org ISSN 2567-6458 (info@uffmm.org)

by
Gerd Doeben-Henisch
(gerd@doeben-henisch.de)

PDF

OVERVIEW

A short story telling you (i) how we interface the intelligent machines (IM) part with the actor-actor interaction (AAI) part, (ii) a first working definition of intelligent machines (IM) in this text, and (iii) how one can define intelligence and how one can measure it.

IM WITHIN AAI

In this blog we see IM not isolated, as a stand-alone endeavor, but as embedded in a discipline called actor-actor interaction (AAI). (Comment: for a more detailed description see the AAI part of this blog.) AAI investigates complex tasks and looks at how different kinds of actors interact in these contexts with technical systems. As long as the participating systems were technical systems one spoke here of a system interface (SI) as that part of a technical system which is interacting with the human actor. In the case of biological systems (mostly humans, but it could be animals as well) one spoke of the user interface (UI). In this text we generalize both cases by the general concept of an actor (biological and non-biological) which has some actor interface (ActI), and this actor interface embraces all properties which are relevant for the interactions of the actor.

For the analysis of the behavior of actors in such task-environments one can distinguish two important concepts: the actor story (AS) describing the context as an observable process, as well as different actor models (AM). Actor models are special extensions of an actor story because an actor model describes the observable behavior of actors as a behavior function (BF) with a set of assumptions about possible internal states of the actors. The assumptions about possible internal states (IS) are either completely arbitrary or empirically motivated.

The embedding of IM within AAI can be realized through the concept of an actor model (AM) and the actor story (AS). Whatever is important for something which is called an intelligent machine application (IMA) can be defined as an actor model within an actor story. This embedding of IM within AAI offers many advantages.

This has to be explained with some more details.

An Intelligent Machine (IM) in an Actor Story

Let us assume that there exists a mathematical-graph representation of an actor story, written as AS_{L_{ε}}. Such a graph has nodes which represent situations. Formally these are sets of properties, possibly structured more fine-grained into subsets which represent different kinds of actors embedded in this situation as well as different kinds of non-actors.

Actors can be classified (as introduced above) as either biological actors (BA) or non-biological actors (NBA). Both kinds of actors can, in another reading, be subsumed under the general term of input-output systems (IO-SYS). An input-output system can be a learning system or a non-learning one. Another basic property is that of being intelligent or non-intelligent. Being a learning system and being an intelligent system are usually strongly connected, but this need not necessarily be so. Being a learning system can be associated with being non-intelligent, and being intelligent can be connected with being non-learning (cf. Figure 1).

Figure 1: Classification of input-output systems according to learning, intelligence and being biological or non-biological

While biological systems are always learning and intelligent, one can find non-biological systems of all types: non-learning and non-intelligent, non-intelligent and learning, non-learning and intelligent, and learning and intelligent.

Learning System

To classify a system as a learning system requires the general ability of this system to change its behavior over time, such that there exists a time-span (t1,t2) after which the behavior in response to certain critical stimuli has changed compared to the time before (cf. Shettleworth (1994)). From this requirement it follows that a learning system is an input-output system with at least one internal state which can change. Thus we have the general assumption:

Def: Learning System (LS)

  1. LS(x) iff
  2. x = <I, O, IS, φ>
  3. φ: I × IS → IS × O
  4. I := Input
  5. O := Output
  6. IS := Internal states

Some x is a learning system (LS) if it is a structure containing sets for input (I), output (O), as well as internal states (IS). These sets are operated on by a behavior function φ which maps inputs and the actual internal states to outputs as well as to new internal states. The set of possible learning functions is infinite.
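A minimal Python sketch of this definition, assuming the internal state is a simple counter; the concrete behavior is only a hypothetical illustration of a behavior function phi whose output changes with its internal state.

class LearningSystem:
    def __init__(self):
        self.internal_state = 0   # IS: here simply a counter of processed inputs

    def phi(self, i):
        # phi: I x IS -> IS x O; the output depends on the changing internal state
        self.internal_state += 1
        o = f"response #{self.internal_state} to {i!r}"
        return self.internal_state, o

ls = LearningSystem()
print(ls.phi("stimulus"))   # (1, "response #1 to 'stimulus'")
print(ls.phi("stimulus"))   # (2, ...): same stimulus, changed behavior, i.e. a learning system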

Intelligent System

The terms 'intelligent' and 'intelligence' are until now not standardized. This means that everybody uses them a little bit arbitrarily.

In this text we take the basic idea of a scientific usage of the term 'intelligence' from experimental psychology, which since the end of the 19th century has developed clearly defined operational concepts that have proved quite stable in their empirical applications. (For an introduction to the field of psychological intelligence concepts see Hilgard et al. 1979; Rost 2009; Rost 2013.)

The central idea of the psychological concept of the usage of the term 'intelligence' is to associate the usage of the term 'intelligence' with the observable behavior of those actors which shall be classified according to defined methods of measurement.

In the case of experimental psychology the actors have been biological systems, mainly humans, in the first years of the research mainly school children of certain ages. Because nobody knew what 'intelligence' means 'as such', one agreed to accept the observable behavior of children in certain task environments as 'manifestations' of a 'presupposed unknown intelligence'. Thus the ability of children to solve defined tasks in a certain defined manner became a norm for what is called 'intelligence'. Solving the tasks in a certain time with less than a certain amount of errors was used as a 'baseline', and all behavior deviating from the baseline was 'better' or 'poorer'.

Thus the 'content' of the 'meaning' of the term 'intelligence' has been delegated to historical patterns of behavior which were common in a certain time-span in a certain geographical and cultural region.

While these behavior patterns can change during the course of time, the general method of measurement is invariant.

In the time since then experimental psychology has modified and elaborated this first concept in some directions.

One direction is the modification of the kind of tasks which are used for the tests. With regard to the cultural context one has modified the content, thereby trying to find those kinds of tasks which seem to be 'invariant' with regard to the presupposed intelligence factor. This is an ongoing process.

The other direction is the focus on the actors as such. Because biological systems like humans change the development of their intelligence with age, one has tried to find 'typical tasks for every age'. This too is an ongoing process.

This history of experimental psychology gives very interesting examples of how one can approach the problem of the usage and the measurement of some X which we call 'intelligence'.

In the context of an AAI approach we have not only biological systems but also non-biological systems. Thus most of the elaborated parameters of psychology for human actors are not general enough.

One possible strategy to generalize the intelligence paradigm of experimental psychology could be to 'free' the selection of task sets from the narrow human cultures of the past and to require only 'clearly defined task sets with defined interfaces and defined contexts'. All these task sets can be arranged either in one super-set or in a parametrized field of sets. The sum of all these sets then defines a space of possible behavior and, associated with this, a space of possible measurable intelligence.

A task then has to be given as an actor story according to the AAI paradigm. Such a specified actor story allows the formal definition of a complexity measure which can be used to measure the 'amount of intelligence necessary to solve such a task'.

With such a more general and extendable approach to the measurement of observable intelligence one can compare all kinds of systems with each other. With such an approach one can further show objectively where biological and non-biological systems differ, where they are similar, and to what extent they differ.

Measuring Intelligence by Actor Stories

Presupposing actor stories (AS) (ideally formalized as mathematical graphs) one can define a first operational general measurement of intelligence.

Def: Task-Intelligence of a task τ (TInt(τ))

  1. Every defined task τ represents a graph g with one shortest path p_{min}(τ) = π_{min} from a start node to a goal node.
  2. Every such shortest path π_{min} has a certain number of nodes: path-nodes(π_{min}) = ν.
  3. The number of solved nodes ν_{solved} can be related to the total number of nodes ν as ν_{solved}/ν. We take TInt(τ) = ν_{solved}/ν. It follows that TInt(τ) is between 0 and 1: 0 ≤ TInt(τ) ≤ 1.
  4. To every task a maximal duration Δ_{max} is attached; all nodes which are solved within this maximal duration Δ_{max} are declared 'solved', all others 'un-solved'.
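A minimal computational sketch of TInt in Python, assuming the task graph is given as an adjacency dictionary and that the set of nodes solved within Δ_max is already known; all concrete names and values are hypothetical.

from collections import deque

def shortest_path(graph, start, goal):
    # breadth-first search returning one shortest path as a list of nodes
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for succ in graph.get(path[-1], []):
            if succ not in visited:
                visited.add(succ)
                queue.append(path + [succ])
    return []

def TInt(graph, start, goal, solved_nodes):
    # TInt(tau) = nu_solved / nu over the nodes of the shortest path
    pi_min = shortest_path(graph, start, goal)
    if not pi_min:
        return 0.0
    nu_solved = sum(1 for node in pi_min if node in solved_nodes)
    return nu_solved / len(pi_min)

graph = {"q0": ["q1", "q2"], "q1": ["q+"], "q2": ["q3"], "q3": ["q+"]}
print(TInt(graph, "q0", "q+", solved_nodes={"q0", "q1"}))   # 2 of 3 path nodes solved -> 0.67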

The usual case will require more than one task to be realized. Thus we introduce the concept of a task field (TF).

Def: Task-Field of type x (TF_{x})
Def: Task-Field Intelligence (TFInt)

A task field TF of type x includes a finite set of individual tasks, TF_{x} = {τ_{x.1}, τ_{x.2}, …, τ_{x.n}} with n ≥ 2. The sum of the individual task intelligence values TInt(τ_{x.i}) is normalized by n, i.e. TFInt(TF_{x}) = (TInt(τ_{x.1}) + TInt(τ_{x.2}) + … + TInt(τ_{x.n})) / n (with 0 in the denominator not allowed). Thus the value of the intelligence of a task field of type x, TFInt(TF_{x}), is again in the domain [0,1].

Because the different tasks in a task field TF can be of different difficulty it should be possible to introduce some weighting for the individual task intelligence values; this should not change the general mechanism (a small sketch with optional weights follows below).
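A minimal sketch of TFInt with optional weights in Python, assuming the individual TInt values have already been computed as above; the weighting is the optional extension mentioned in the text.

def TFInt(tint_values, weights=None):
    # normalized (optionally weighted) average of the individual TInt values of a task field
    if not tint_values:
        raise ValueError("a task field needs at least one task")
    if weights is None:
        weights = [1.0] * len(tint_values)
    return sum(w * t for w, t in zip(weights, tint_values)) / sum(weights)

print(TFInt([1.0, 0.5, 0.0]))                     # plain average: 0.5
print(TFInt([1.0, 0.5, 0.0], [2.0, 1.0, 1.0]))    # first task weighted double: 0.625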

Def: Combined Task-Fields (TF)

In the face of the huge variety of possible task fields in this world it can make sense to introduce more general layers by grouping task fields of different types together into larger combined fields, like TF_{x,…,z} = TF_{x} ∪ TF_{y} ∪ … ∪ TF_{z}. The task field intelligence TFInt of such combined task fields would be computed as before.

Def: Omega Task-Field at time t (TF_{ω}(t))

The most comprehensive assembly of such combinations shall here be called the Omega Task-Field at time t, TF_{ω}(t). This indicates the known maximum of intelligence measurements at that point of time.

Measurement Comments

With these assumptions the term intelligence is restricted to clearly defined domains: either to an individual task, to a task field of type x, to some grouped task fields, or to the actual omega task field. In every such domain the intelligence value lies in the realm [0,1], or, written differently, is some value between 0 and 100%.

Independent of the type of an actor, biological or not, one can measure the intelligence of such an actor with the same domains of defined tasks. As a result one can easily compare all known actors with regard to such defined task domains.

Because the acting actors can be quite different in their input-output capabilities, every actor has to organize some interface which enables him to work on the defined task. There are no special restrictions on the format of such an interface, but there is one requirement which has to be observed strictly: the interface as such is not allowed to do any kind of computation beyond providing the necessary input from the task domain or providing the necessary output to the domain. Only then are the different tests able to reveal differences between the different actors.

If the tests show differences between certain types of actors with regard to a certain task or task field, then this is a chance to develop smart assistive interfaces which can help the actor in question to overcome his weakness compared to the other type of actor. Thus this kind of measuring intelligence can be a strong supporter of a better world in the future.

Another consequence of differing intelligence values can be to look at the inner structure of an actor with weaker values and to ask how one could improve his capabilities. This can be done e.g. by different kinds of training or by improving his system structures.