
COMMON SCIENCE as Sustainable Applied Empirical Theory, besides ENGINEERING, in a SOCIETY

eJournal: uffmm.org
ISSN 2567-6458, 19 June 2022 – 30 December 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of the Philosophy of Science theme within the uffmm.org blog.

This is work in progress:

  1. The whole text has a dynamic of its own, which induces many changes; it is difficult to plan ‘in advance’.
  2. Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
  3. I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm blog.

INTRODUCTION

In a rather foundational paper about the idea of generalizing ‘systems engineering’ [*1] to the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in the light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge to see every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].

With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ where all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, ideas, experiments, and the like, it follows from the other requirement of ‘predictability’ (of being able to make some useful ‘forecasts’) that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied we will use the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.
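
To make the classical ‘inference concept’ concrete, here is a minimal sketch in Python (all statements and rules are invented examples, not part of any real knowledge base): a deduction rule like modus ponens is applied mechanically to the ‘given’ knowledge and produces ‘new’ knowledge in a fully transparent way. It only illustrates the formal core which ‘common logic’ extends.

```python
# Minimal sketch of a formal inference step (modus ponens):
# from the 'given' knowledge {A, A -> B} the 'new' knowledge B is derived.
# The statements below are invented illustration values.

given_facts = {"it_rains"}
given_rules = [
    ("it_rains", "street_is_wet"),          # if it rains, the street is wet
    ("street_is_wet", "cycling_is_risky"),  # if the street is wet, cycling is risky
]

def derive(facts, rules):
    """Apply modus ponens repeatedly until no new statement can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)  # 'new' knowledge, derived transparently
                changed = True
    return derived

print(derive(given_facts, given_rules))
# e.g. {'it_rains', 'street_is_wet', 'cycling_is_risky'}
```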

The following chapters/ sections try to illustrate this common science view by visiting different special views which all are only ‘parts of a whole’, a whole which we can ‘feel’ in every moment but which we cannot yet completely grasp with our theoretical concepts.

CONTENT

  1. Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
  2. Concrete Abstract Statements (Main message: “… you will probably detect that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. …If you cannot provide … ‘concrete situations’ the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending on the decoding of the hearer.”)
  3. True False Undefined (Main message: “… it reveals that ’empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences, which are ‘observation friendly’.”)
  4. Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.“)
  5. Playing with the Future (Main message: “In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fix points (= words, expressions) which as such do not change, while the meaning is ‘free to change’ in any direction. And this built-in ‘dynamics’ represents an ‘internal potential’ for uncountably many possible states which could perhaps become ‘true’ in some ‘future state’. Thus the ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (but see [18])”)
  6. Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/ predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world which appears as a radically infinite and indeterministic space.)
  7. !!! From here all the following chapters have to be re-written !!!
  8. THE LOGIC OF EVERYDAY THINKING. Lets try an Example (Will probably be re-written too)
  9. Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
  10. … more re-writing will probably happen …
  11. Everyday Language: German Example
  12. Everyday Language: English
  13. Natural Logic
  14. Predicate Logic
  15. True Statements
  16. Formal Logic Inference: Preserving Truth
  17. Ordinary Language Inference: Preserving and Creating Truth
  18. Hidden Ontologies: Cognitively Real and Empirically Real
  19. AN INFERENCE IS NOT AUTOMATICALLY A FORECAST
  20. EMPIRICAL THEORY
  21. Side Trip to Wikipedia
  22. SUSTAINABLE EMPIRICAL THEORY
  23. CITIZEN SCIENCE 2.0
  24. … ???

COMMENTS

wkp-en := English Wikipedia

/* Often people argue against the usage of the wikipedia encyclopedia as not ‘scientific’ because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as being ‘stable’, which presupposes further that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ is in no correspondence with real scientific texts! The reality of empirical science, even in special disciplines like ‘physics’, is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, Newton, Einstein and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour of Wikipedia */

[*1] Meaning operator ‘…’ : In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (the theory of formal languages, programming languages, meta-logic) the inverted comma is used in special ways. In this text, which is primarily a philosophical text, the inverted comma sign is used as a ‘meta-language operator’ to raise the attention of the reader to the fact that the ‘meaning’ of the word enclosed in the inverted commas is ‘text specific’: in everyday language usage a speaker uses a word and tacitly assumes that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and the decision which one of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus, it can be, but need not be, a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which perhaps is not the commonly assumed one. Nevertheless, because the ‘common meaning’ is no ‘clear and sharp subject’, a ‘meaning operator’ with the inverted commas also does not have a very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂

[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but in some sense the problem of how to ‘shape the whole biosphere’ in a way which enables a life in the universe for the time beyond that point where the sun turns into a ‘red giant’, whereby life will become impossible on the planet earth (some billion years ahead) [22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harley, summarized at the end of the text. [23]

[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Look for instance at houses and buildings: to a certain degree the parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system and most parts of our ‘daily life’ follow certain ‘standards’ which make ‘planning’ possible. But there exists a certain percentage of human persons who are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical manner’ with different probabilities of occurrence. To ‘find our way’ in these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ to enable a basic structure for expectations and planning. Thus, if in this text the expression ‘normal’ is used, it refers to the ‘most common occurrences’.
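
As a small illustration of this statistical reading of ‘normal’ (a Python sketch with invented numbers, not empirical data), one can mark the most frequent occurrence of a property as the ‘normal’ case:

```python
# Sketch: 'normal' understood as the most common occurrence of a property.
# The door heights below are invented illustration values (in cm).
from collections import Counter

observed_door_heights = [200, 200, 210, 200, 190, 200, 210, 200]
normal_height, frequency = Counter(observed_door_heights).most_common(1)[0]

print(f"'normal' door height: {normal_height} cm "
      f"({frequency} of {len(observed_door_heights)} observations)")
```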

[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.

[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence ” … ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ or ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ were never and are still not possible. One of the last big projects (perhaps the biggest ever) of a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer deutschen Grammatik” (“Basic features of a German grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently well describable and explainable. See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.
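
The point that expression statistics never reach the meaning can be illustrated with a small Python sketch (the sample sentence is an invented example): word and word-pair frequencies are easy to compute from observable expressions, but nothing in these counts tells us what any word means.

```python
# Sketch: expressions as observable empirical facts vs. hidden meaning.
# Frequencies and combinations can be counted, the associated meanings cannot.
from collections import Counter

sample_text = "the cat sat on the mat and the dog sat on the cat"  # invented example
words = sample_text.split()

word_counts = Counter(words)                  # how often each expression occurs
pair_counts = Counter(zip(words, words[1:]))  # in which combinations it occurs

print(word_counts.most_common(3))  # e.g. [('the', 4), ('cat', 2), ('sat', 2)]
print(pair_counts.most_common(2))  # e.g. [(('the', 'cat'), 2), (('sat', 'on'), 2)]
```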

[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’; it can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But as you can experience, the ability of people to resolve differing opinions is often quite weak. Culture as a whole suffers from this.

[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post which are not yet finished either. Knowledge is a dynamic network of interwoven views …).

[1d] ‘usual science’ is the game of science without having a sustainable format like in citizen science 2.0.

[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science

Citation = “Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1][2]


Citation = “New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems.[27][28] Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions,[29] government agencies, and companies.[30][31] The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritizing the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.”

[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science

[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.

Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”

[4] Scientific theory, see: wkp-en: https://en.wikipedia.org/wiki/Scientific_theory

Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”

[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism

[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method

Citation = “The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]” [4c]

and

Citation = “The purpose of an experiment is to determine whether observations[A][a][b] agree with or conflict with the expectations deduced from a hypothesis.[6]: Book I, [6.54] pp.372, 408 [b] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]

[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/

[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic

[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)

[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory

[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique

[10] = [5]

[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */

[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain realize a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly you have to notice that it is not at the place where it ‘should be’. …

[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.

[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra

[17] Boolean (or propositional) Logic: As one can see in the mentioned articles of the English wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15] and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the general perspective is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is largely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
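
As a small illustration of such a ‘reduced calculus’ (a Python sketch with invented example formulas, not tied to any particular textbook notation): a propositional formula is just a function of truth values over abstract names, and an ‘inference’ reduces to checking that the conclusion is true in every truth-table row in which all premises are true.

```python
# Sketch of propositional (boolean) logic as a 'play with abstract names and operators'.
from itertools import product

def entails(premises, conclusion, names):
    """True if the conclusion holds in every assignment that satisfies all premises."""
    for values in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False
    return True

# Invented example: from A and (A -> B) infer B; 'A -> B' is encoded as 'not A or B'.
premises = [lambda v: v["A"], lambda v: (not v["A"]) or v["B"]]
conclusion = lambda v: v["B"]

print(entails(premises, conclusion, ["A", "B"]))  # True
```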

[18] Clearly, thinking alone cannot necessarily induce a possible state which along the time line will become a ‘real state’. There are numerous factors ‘outside’ the individual thinking which are ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with other individual thinking and, in some cases, can get a ‘grip’ on real factors causing real changes.

[19] This kind of knowledge is not delivered by brain science alone but primarily from experimental (cognitive) psychology which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.

[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.

[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022

[22] The sun, see wkp-en https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)

[23] Clark, William C., and Alicia G. Harley (2020), “Sustainability Science: Toward a Synthesis”, Annual Review of Environment and Resources 45 (1): 331–86, https://doi.org/10.1146/annurev-environ-012420-043621 (figure CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069)

[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability

[25] Sustainable Development in wkp-en: https://en.wikipedia.org/wiki/Sustainable_development

[26] Marope, P.T.M; Chakroun, B.; Holmes, K.P. (2015). Unleashing the Potential: Transforming Technical and Vocational Education and Training (PDF). UNESCO. pp. 9, 23, 25–26. ISBN 978-92-3-100091-1.

[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4

[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London

[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342

[30] Döben-Henisch, G.,  Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263

[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video by the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.

[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386

[] Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) in wkp-en, URL: https://en.wikipedia.org/wiki/Intergovernmental_Science-Policy_Platform_on_Biodiversity_and_Ecosystem_Services

[] IPBES (2019): Global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondizio, J. Settele, S. Díaz, and H. T. Ngo (editors). IPBES secretariat, Bonn, Germany. 1148 pages. https://doi.org/10.5281/zenodo.3831673

[] Michaelis, L. & Lorek, S. (2004). “Consumption and the Environment in Europe: Trends and Futures.” Danish Environmental Protection Agency. Environmental Project No. 904.

[] Pezzey, John C. V.; Toman, Michael A. (2002). “The Economics of Sustainability: A Review of Journal Articles” (PDF). Archived from the original (PDF) on 8 April 2014. Retrieved 8 April 2014.

[] World Business Council for Sustainable Development (WBCSD)  in wkp-en: https://en.wikipedia.org/wiki/World_Business_Council_for_Sustainable_Development

[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club

[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)

[] UN Secretary-General, World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development: note / by the Secretary-General, https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (A more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )

/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */

[] Chaudhuri, S.,et al.Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).

[] Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior”, in Language, 35, No. 1 (1959), 26-58.(Online: https://chomsky.info/1967____/, accessed: July 21, 2022)

[] Churchman, C. West (December 1967). “Wicked Problems”. Management Science. 14 (4): B-141–B-146. doi:10.1287/mnsc.14.4.B141.

[-] Yen-Chia Hsu, Illah Nourbakhsh, “When Human-Computer Interaction Meets Community Citizen Science”, Communications of the ACM, February 2020, Vol. 63 No. 2, Pages 31-34, 10.1145/3376892, https://cacm.acm.org/magazines/2020/2/242344-when-human-computer-interaction-meets-community-citizen-science/fulltext

[] Yen-Chia Hsu, Ting-Hao ‘Kenneth’ Huang, Himanshu Verma, Andrea Mauri, Illah Nourbakhsh, Alessandro Bozzon, Empowering local communities using artificial intelligence, DOI:https://doi.org/10.1016/j.patter.2022.100449, CellPress, Patterns, VOLUME 3, ISSUE 3, 100449, MARCH 11, 2022

[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8

[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.

[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4

[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341

[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC General Data Protection Regulation; http://eur-lex.europa.eu/eli/reg/2016/679/oj (in force since 25 May 2018) [26.2.2022]

[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973

[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2

[] Gudwin, R.R. (2002), Semiotic Synthesis and Semionic Networks, S.E.E.D. Journal (Semiotics, Energy, Evolution, Development), Volume 2, No.2, pp.55-83.

[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings

[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.

[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

[] Lenat, D. What AI can learn from Romeo & Juliet.Forbes (2019)

[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)

[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)

[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.

[] Markus Luczak-Roesch, Kieron O’Hara, Ramine Tinati, Nigel Shadbolt, Socio-technical Computation, CSCW’15 Companion, March 14–18, 2015, Vancouver, BC, Canada, ACM 978-1-4503-2946-0/15/03, http://dx.doi.org/10.1145/2685553.2698991

[] Marcus, G.F., et al. Overregularization in language acquisition. Monographs of the Society for Research in Child Development 57 (1998).

[] Gary Marcus and Ernest Davis, Rebooting AI, Pantheon, Sep 10, 2019, 288 pages

[] Gary Marcus, Deep Learning Is Hitting a Wall. What would it take for artificial intelligence to make real progress, March 10, 2022, URL: https://nautil.us/deep-learning-is-hitting-a-wall-14467/ (accessed: July 20, 2022)

[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017

[]  Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */

[] Olazaran, M. , A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).

[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530

[] Karl Popper, „A World of Propensities“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1988, slightly extended reprint 1990, repr. 1995)

[] Karl Popper, „Towards an Evolutionary Theory of Knowledge“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)

[] Karl Popper, „All Life is Problem Solving“, article, originally a 1991 lecture given in German, first published in the German book „Alles Leben ist Problemlösen“ (1994), then in the English book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York

[] Rittel, Horst W.J.; Webber, Melvin M. (1973). “Dilemmas in a General Theory of Planning” (PDF). Policy Sciences. 4 (2): 155–169. doi:10.1007/bf01405730. S2CID 18634229. Archived from the original (PDF) on 30 September 2007. [Reprinted in Cross, N., ed. (1984). Developments in Design Methodology. Chichester, England: John Wiley & Sons. pp. 135–144.]

[] Ritchey, Tom (2013) [2005]. “Wicked Problems: Modelling Social Messes with Morphological Analysis”. Acta Morphologica Generalis 2 (1). ISSN 2001-2241. Retrieved 7 October 2017.

[] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th US ed., 2021, URL: http://aima.cs.berkeley.edu/index.html (accessed: July 20, 2022)

[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.

[] Skaburskis, Andrejs (19 December 2008). “The origin of ‘wicked problems’”. Planning Theory & Practice 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘

[] Tonkinwise, Cameron (4 April 2015). “Design for Transitions – from and to what?”. Academia.edu. Retrieved 9 November 2017.

[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).

[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021): Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828

[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681

[-] Sarah West, Rachel Pateman, 2017, “How could citizen science support the Sustainable Development Goals?“, SEI Stockholm Environment Institut , 2017, see: https://mediamanager.sei.org/documents/Publications/SEI-2017-PB-citizen-science-sdgs.pdf

[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543

[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422

[] Boder, A. (2006), “Collective intelligence: a keystone in knowledge management”, Journal of Knowledge Management, Vol. 10 No. 1, pp. 81-93. https://doi.org/10.1108/13673270610650120

[] Wikipedia, ‘Weak and strong sustainability’, https://en.wikipedia.org/wiki/Weak_and_strong_sustainability (accessed: July 21, 2022)

[] Florence Maraninchi, Let us Not Put All Our Eggs in One Basket. Towards new research directions in computer Science, CACM Communications of the ACM, September 2022, Vol.65, No.9, pp.35-37, https://dl.acm.org/doi/10.1145/3528088

[] AYA H. KIMURA and ABBY KINCHY, “Citizen Science: Probing the Virtues and Contexts of Participatory Research”, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.

[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a

[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007

[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp. 1179-1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006

[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011

[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105

[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0

[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691

[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094

[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence

[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285

[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147

[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004

[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663

[] Miguel de Castro Neto and Ana Espírtio Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14

[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568

[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273

[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3

[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5

[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80

[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15, DOI 10.1007/s40309-013-0015-4

[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30

[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032

[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427

[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331

[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x

[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77

[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53

[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569

[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology

[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313

[] AYA H. KIMURA and ABBY KINCHY (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages

[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, 19 December 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#

Comment by Gerd Doeben-Henisch:

[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.

[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821

[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, Pages 663–670

[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 340 Pine Street, Sixth Floor, San Francisco, CA, United States, Conference 29 June 2000- 2 July 2000, 29.June 2000

[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum (2019), Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations, Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s): https://arxiv.org/pdf/1904.06387.pdf

You can read in the abstract:
“A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”

[] Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei, (2017), Deep reinforcement learning from human preferences, https://arxiv.org/abs/1706.03741

In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”

[] Melanie Mitchell (2021), Abstraction and Analogy-Making in Artificial Intelligence, https://arxiv.org/pdf/2102.10717.pdf

In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress.”

[] Melanie Mitchell, (2021), Why AI is Harder Than We Think, https://arxiv.org/pdf/2102.10717.pdf

In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”

[] Stuart Russell, (2019), Human Compatible: AI and the Problem of Control, Penguin Books, Allen Lane; 1st edition (8 October 2019)

In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”

[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600

In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”

[] John H. Miller, (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages, Paperback ISBN: 978-1947864429, DOI: 10.37911/9781947864429

In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”

[] Stefani A. Crabtree, Global Environmental Change, https://doi.org/10.1016/j.gloenvcha.2022.102597

In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”



OKSIMO APPLICATIONS – Simple Examples – Citizens of a County

eJournal: uffmm.org ISSN 2567-6458

27 March 2022 – 27 March 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Oksimo Application theme which is part of the uffmm blog.

PREFACE

This post shows a simple simulation example with the beta version of the new version 2 of the oksimo programming environment. This example shall illustrate the concept of an ‘Everyday Empirical Theory‘ as described in this blog 11 days earlier. It is intentionally kept ‘as simple as possible’. Probably some more examples will be shown.

FROM THEORY TO AN APPLICATION

To apply a theory concept in an everyday world, many formats are possible. This text shows what such an application looks like when the oksimo programming environment is applied. Until now there exists only a German blog (oksimo.org) describing the oksimo paradigm in some detail. The examples there, however, are written with oksimo version 1, which did not allow the use of math. In version 2 this is possible, accompanied by some visual graph features.

Everyday Experts – Basic Ideas

This figure shows a simple outline of the basic assumptions of the oksimo programming environment constituting the oksimo paradigm: (i) Every human person is assumed to be a 'natural expert' who is a member of a bigger population sharing the same 'everyday language', including basic math. (ii) An actor is embedded in some empirical environment, including the actor's own body and other human actors. (iii) Human actors are capable of elaborating, as inner states, different kinds of 'mental (cognitive) models' based on their experience of the environment and their own body. (iv) Human actors are further capable of using symbolic languages to 'represent' properties of these mental models encoded in symbolic expressions. Such symbolic encoding presupposes an 'inner meaning function' which has to be learned. (v) In the oksimo programming environment one needs two kinds of symbolic expressions for the description of a 'given state': (v.1) language expressions to describe general properties and relations which are assumed to be 'given' (= 'valid by experience'); (v.2) language expressions to name concrete quantitative properties (simple math expressions).

This figure shows the idea of changing a given state (situation) by so-called 'change rules'. A change rule encodes experience from the everyday world about the conditions under which some properties of a given situation S can be 'changed' so that a 'new situation' S* comes into being. Generally a given state can change if a language expression is either 'deleted' from the description or 'added' to it. Another possibility is that one of the given quantitative expressions changes its value. In the simple situation above the only change is a change of the number of citizens caused by some growth effect. But, as other examples will demonstrate, anything that is possible in the empirical world can be modeled.

SOME MORE FEATURES

The basic schema of the oksimo paradigm assumes the following components:

  1. The description of a ‘given situation’ as a ‘start state’.
  2. The description of a 'vision' functioning as a 'goal' which allows a basic 'benchmarking'.
  3. A list of 'change rules' which describe the assumed possible changes.
  4. An 'inference engine' called 'simulator': Depending on the number of wanted 'simulation cycles' ('inferences'), the simulator applies the change rules to a given state S, thereby producing a 'follow-up state' S*, which becomes the new given state. The series of generated states represents the 'history' of a simulation. Every follow-up state is an 'inference' and by definition also a 'forecast'.

All these features (1) – (4) together constitute a full empirical theory in the sense of the theory post mentioned before.
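To make the interplay of these four components a bit more tangible, the following sketch condenses them into a few lines of Python. This is not the actual oksimo implementation; all identifiers (simulate, vision_reached, condition, effect) are illustrative assumptions.

```python
# Minimal sketch (not the actual oksimo code) of components (1)-(4):
# a start state, a vision, a list of change rules, and a simulator loop.

def simulate(start_state, change_rules, cycles):
    """Apply the change rules cycle by cycle; every follow-up state is a forecast."""
    state = dict(start_state)
    history = [state]
    for _ in range(cycles):
        follow_up = dict(state)                  # S* starts as a copy of S
        for rule in change_rules:
            if rule["condition"](state):         # IF the conditions hold in S ...
                rule["effect"](follow_up)        # ... THEN apply the effects to S*
        state = follow_up
        history.append(state)
    return history                               # the 'history' of the simulation

def vision_reached(state, vision_predicates):
    """Basic benchmarking: percentage of vision statements satisfied by a state."""
    hits = sum(1 for predicate in vision_predicates if predicate(state))
    return 100.0 * hits / len(vision_predicates)
```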

Let us look at a real simulation.

A REAL SIMULATION

The following example has been run with Oksimo v2.0 (Pre-Release) (353e5). Hopefully we can turn the pre-release into a full release within the next few weeks.

A VISION

Name: v2026

Expressions:

The Main-Kinzig County exists.

Math expressions:

YEAR>2025 and YEAR<2027

This simple goal assumes the existence of the Main-Kinzig County for the year 2026.

GIVEN START STATE

Name: StartSimple1

Expressions:

The Main-Kinzig County exists.

The number of citizens is known.

Comparing the number of different years one has computed a growth rate.

Math expressions:

YEAR=2018Number

CITIZENS=418950Amount

GROWTH=0.0023Percentage

The start state makes some simple statements which are assumed to be ‘valid’ in a ‘real given situation’ by the participating natural experts.

CHANGE RULES

In this example there is only one change rule (in principle there can be as many change rules as wanted).

Rule name: Growth1

Probability: 1.0

Conditions:

The Main-Kinzig County exists.

Math conditions:

CITIZENS < 450000

Effects plus:

Effects minus:

Effects math:

CITIZENS=CITIZENS+(CITIZENS*GROWTH)

YEAR=YEAR+1

This change rule is rather simple. It only checks whether the Main-Kinzig County exists and whether the number of citizens is still below 450000. If this is the case, then the year is incremented and the number of citizens is increased according to an extremely simple formula.
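Because the rule fires exactly once per cycle as long as its conditions hold, its effect can be reproduced outside of oksimo with a few lines of Python; the values for rounds 7 – 9 below match the quick log shown further down (a sketch under this assumption, not oksimo output):

```python
# Sketch: after n cycles the Growth1 rule yields
# YEAR = 2018 + n and CITIZENS = 418950 * (1 + 0.0023)**n.
YEAR0, CITIZENS0, GROWTH = 2018, 418950, 0.0023

for n in (7, 8, 9):                                  # simulation rounds 7-9
    year = YEAR0 + n
    citizens = CITIZENS0 * (1 + GROWTH) ** n
    math_vision = 2025 < year < 2027                 # YEAR>2025 and YEAR<2027
    print(year, round(citizens, 2), math_vision)
# 2025 425741.81 False
# 2026 426721.02 True
# 2027 427702.48 False
```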

For every named quantity in this simulation (YEAR, GROWTH, CITIZENS) the values are collected in every simulation cycle and can therefore be used for evaluations. In this simple case only the quantities YEAR and CITIZENS show changes:

Simple linear graph for the quantity named YEAR
Simple linear graph for the quantity named CITIZENS

Here is the quick log of simulation rounds 7 – 9:

Round 7

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2025Number
CITIZENS: 425741.8149741673Amount
GROWTH: 0.0023Percentage

50.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
None

Round 8

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2026Number
CITIZENS: 426721.0211486079Amount
GROWTH: 0.0023Percentage

100.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
YEAR>2025 and YEAR<2027,

Round 9

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2027Number
CITIZENS: 427702.4794972497Amount
GROWTH: 0.0023Percentage

50.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
None

In round 8 one can see, that the simulation announces:

100.00 percent of your vision was achieved by reaching the following states: The Main-Kinzig County exists., And the following math visions: YEAR>2025 and YEAR<2027

From this the natural expert can conclude that his requirements given in the vision are ‘fulfilled’/’satisfied’.

WHAT COMES NEXT?

More examples will follow in a loose order. Here you find the next one.

POPPER and EMPIRICAL THEORY. A conceptual Experiment


eJournal: uffmm.org
ISSN 2567-6458, 12.March 2022 – 16.March 2022, 11:20 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In a preceding post I have outlined the concept of an empirical theory based on a text from Popper 1971. In his article Popper points to a minimal structure of what he calls an empirical theory. A closer investigation of his texts reveals many questions which should be clarified for a more concrete application of his concept of an empirical theory.

In this post it will be attempted to elaborate the concept of an empirical theory more concretely from a theoretical point of view as well as from an application point of view.

A Minimal Concept of an Empirical Theory

The figure shows the process of (i) observing phenomena, (ii) representing these in expressions of some language L, (iii) elaborating conjectures as hypothetical relations between different observations, (iv) using an inference concept to deduce some forecasts, and (v) comparing these forecasts with those observations which are possible in an assumed situation.

Empirical Basis

As a starting point, as well as a reference for testing, Popper assumes an 'empirical basis'. The question arises what this means.

In the texts examined so far from Popper this is not well described. Thus in this text some ‘assumptions/ hypotheses’ will be formulated to describe some framework which should be able to ‘explain’ what an empirical basis is and how it works.

Experts

Those who usually build theories are scientists, i.e. experts. For a general concept of an 'empirical theory' it is assumed here that every citizen is a 'natural expert'.

Environment

Natural experts are living in ‘natural environments’ as part of the planet earth, as part of the solar system, as part of the whole universe.

Language

Experts ‘cooperate’ by using some ‘common language’. Here the ‘English language’ is used; many hundreds of other languages are possible.

Shared Goal (Changes, Time, Measuring, Successive States)

For cooperation it is necessary to have a 'shared goal'. A 'goal' is an 'idea' about a possible state in the 'future' which is 'somehow different' from the given actual situation. Such a future state can be approached by some 'process', a series of possible 'states', which usually are characterized by 'changes' manifested as 'differences' between successive states. The concept of a 'process', a 'sequence of states', implies some concept of 'time'. And time needs a concept of 'measuring time'. 'Measuring' basically means to 'compare something to be measured' (the target) with 'some given standard' (the measuring unit). Thus to measure the height of a body one can compare it with some object called a 'meter' and then state that the target (the height of the body) is 1.8 times as large as the given standard (the meter object). In the case of time it was customary for many thousands of years to use the 'cycles of the sun' to define the concept ('unit') of a 'day' and a 'night'. Based on this one could 'count' the days as one day, two days, etc., and one could introduce further units like a 'week' by defining 'one week compares to seven days', or 'one month compares to 30 days', etc. This reveals that one needs some more concepts like 'counting', and implicitly associated with this the concept of a 'number' (like '1', '2', …, '12', …). Later the measuring of time was delegated to 'time machines' (called 'clocks') which produce 'time units' mechanically, and then one could be 'more precise'. But having more than one clock generates the need for 'synchronizing' different clocks at different locations. This challenge continues until today. Having a time machine called a 'clock', one can define a 'state' only by relating the state to an 'agreed time window' = (t1,t2), which allows the description of states in a successive temporal order: the state in the time-window (t1,t2) is 'before' the time-window (t2,t3). Then one can try to describe the properties of a given natural environment correlated with a certain time-window, e.g. saying that the 'observed' height of a body in time-window w1 was 1.8 m, and in a later time-window w6 the height was still 1.8 m. In this case no changes could be observed. If one had observed 1.9 m at w6, then a difference occurs when comparing the two successive states.

Example: A County

Here we will assume as an example for a natural environment a ‘county’ in Germany called ‘Main-Kinzig Kreis’ (‘Kreis’ = ‘county’), abbreviated ‘MKK’. We are interested in the ‘number of citizens’ which are living in this county during a certain time-window, here the year 2018 = (1.January 2018, 31.December 2018). According to the statistical office of the state of Hessen, to which the MKK county belongs, the number of citizens in the MKK during 2018 was ‘418.950’.(cf. [2])

Observing the Number of Citizens

One can ask in which sense the number '418.950' can be understood as an 'observation statement'. If we understand 'observation' as the everyday expression for 'measuring', then we are looking for a 'procedure' which allows us to 'produce' this number '418.950' associated with the unit 'number of citizens during a year'. As everybody can immediately realize, no single person can simply observe all citizens of that county. To 'count' all citizens in the county one would have to 'travel' to all places in the county where citizens are living and count every person. Such travelling would take time; it could easily take more than 40 years working 24 hours a day. Thus this procedure would not work. A different approach could be to find citizens in each of the 24 cities in the MKK [1] to help in this counting procedure. With proper management and some 'quality' control of the counting, this could perhaps work. An interesting experiment. Here we 'believe' in the number of citizens delivered by the statistical office of the state of Hessen [2], while keeping some reservation regarding the question how 'good' this number really is. Thus our 'observation statement' would be: "In the year 2018, 418.950 citizens have been counted in the MKK (according to the information of the statistical office of the state of Hessen)." This observation statement lacks a complete account of the procedure by which this counting really happened.

Concrete and Abstract Words

There are interesting details in this observation statement. In this observation statement we notice words like 'citizen' and 'MKK'. To talk about 'citizens' is not a talk about some objects in the direct environment. What we can directly observe are concrete bodies which we have learned to 'classify' as 'humans', enriched for example with 'properties' like 'man', 'woman', 'child', 'elderly person', 'neighbor' and the like. But to classify someone as a 'citizen' requires knowledge about some official procedure of 'registering as a citizen' at a municipal administration, recorded in some certified document. Thus the word 'citizen' has a 'meaning' which needs some 'concrete procedure to get the needed information'. Thus 'citizen' is not a 'simple word' but a 'more abstract word' with regard to the associated meaning. The same holds for the word 'MKK', short for 'Main-Kinzig Kreis'. At first glance 'MKK' appears as a 'name' for some entity. But this entity cannot be observed directly either. One component of the 'meaning' of the name 'MKK' is a 'real geographical region' whose exact geographic extensions have been 'measured' by official institutions and marked in an 'official map' of the state of Hessen. This region is associated with an official document of the state of Hessen stating that this geographical region has to be understood as a 'county' with the name MKK. There exist more official documents defining what is meant by the word 'county'. Thus the word 'MKK' has a rather complex meaning which is not easy to understand and to check for being 'true'. The author of this post lives in the MKK and he would not be able to tell all the details of the complete meaning of the name 'MKK'.

First Lessons Learned

Thus one can learn from these first considerations that we as citizens are living in a natural environment where we are using observation statements whose words have potentially rather complex meanings, and 'checking' these meanings requires a serious amount of clarification.

Conjectures – Hypotheses

Changes

The above text shows that ‘observations as such’ show nothing of interest. Different numbers of citizens in different years have no ‘message’. But as soon as one arranges the years in a ‘time line’ according to some ‘time model’ the scene is changing: if the numbers of two consecutive years are ‘different’ then this ‘difference in numbers’ can be interpreted as a ‘change’ in the environment, but only if one ‘assumes’ that the observed phenomena (the number of counted citizens) are associated with some real entities (the citizens) whose ‘quantity’ is ‘represented’ in these numbers.[5]

And again, the ‘difference between consecutive numbers’ in a time line cannot be observed or measured directly. It is a ‘second order property’ derived from given measurements in time. Such a 2nd order property presupposes a relationship between different observations: they ‘show up’ in the expressions (here numbers), but they are connected back in the light of the agreed ‘meaning’ to some ‘real entities’ with the property ‘overall quantity’ which can change in the ‘real setting’ of these real entities called ‘citizens’.

In the example of the MKK the statistical office of the state of Hessen computed a difference between two consecutive years which has been represented as a 'growth factor' of 0.4%. This means that the number of citizens of the year 2018 increases towards the year 2019 as follows: number-citizens(2019) = number-citizens(2018) + (number-citizens(2018) * growth-factor), i.e. number-citizens(2019) = 418950 + (418950 * 0.004) = 418950 + 1675.8 = 420625.8.

Applying change repeatedly

If one could assume that the 'growth rate' stays constant over time, then one could apply the growth rate again and again to the actual number of citizens of the MKK every year. This would yield the following simple table:

Year | Number | Growth Rate
2018 | 418.950,0 | 0,0040
2019 | 420.625,80 |
2020 | 422.308,30 |
2021 | 423.997,54 |
2022 | 425.693,53 |
2023 | 427.396,30 |
Table: Simplified description of the increase of the number of citizens in the Main-Kinzig county in Germany with an assumed growth rate of 0,4% per year.

As we know from reality, the assumption of a fixed growth rate is not very probable for complex dynamic systems.
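The table can be reproduced with a few lines of Python (a sketch; the rounding to two decimals is an assumption of this presentation, not of the statistical office):

```python
# Sketch: repeated application of the fixed growth rate of 0.4% (cf. the table above).
citizens, growth = 418950.0, 0.004
for year in range(2018, 2024):
    print(year, round(citizens, 2))
    citizens = citizens + citizens * growth
# 2018 418950.0   2019 420625.8   2020 422308.3
# 2021 423997.54  2022 425693.53  2023 427396.3
```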

Theory

Continuing the previous considerations one has to ask how the layout of a 'complete empirical theory' would look.

As I commented in the preceding post about Popper’s 1971 article about ‘objective knowledge’ there exists today no one single accepted framework for a formalized empirical theory. Therefore I will stay here with a ‘bottom-up’ approach using elements taken from everyday reasoning.

What we have until now is the following:

  1. Before the beginning of a theory building process one needs a group of experts being part of a natural environment using the same language which share a common goal which they want to enable.
  2. The natural environment is assumed by the experts to be a 'process' of consecutive states in time. The 'granularity' of the process depends on the used 'time model'.
  3. As a starting point they collect a set of statements talking about those aspects of a ‘selected state’ at some time t which they are interested in.
  4. This set of statements describes a set of ‘observable properties’ of the selected state which is understood as a ‘subset’ of the properties of the natural environment.
  5. Every statement is understood by the experts as being ‘true’ in the sense, that the ‘known meaning’ of a statement has an ‘observable counterpart’ in the situation, which can be ‘confirmed’ by each expert.
  6. For each pair of consecutive states it holds that the sets of statements of the two states can be 'equal' or can show 'differences'.
  7. A ‘difference’ between sets of statements can be interpreted as pointing to a ‘change in the real environment’.[5]
  8. Observed differences can be described by special statements called ‘change statements’ or simply ‘rules’.
  9. A change statement has the format: IF a set of statements ST* is a subset of the statements ST of a given state S, THEN with probability p a set of statements ST+ will be added to the actual state S and a set of statements ST- will be removed from the statements ST of the given state S. This results in a new succeeding state S* with the representing statements ST – (ST-) + (ST+), depending on the assumed probability p. (A small code sketch of this schema follows after this list.)
  10. The list of change statements is an ‘open set’ according to the assumption, that an actual state is only a ‘subset’ of the real environment.
  11. Until now we have an assumed state S, an assumed goal V, and an open set of change statements X.
  12. Applying change statements to a given state S will generate a new state S*. Thus the application of a subset X' of the open set of change statements X onto a given state S will here be called 'generating a new state by a procedure'. Such a state-generating procedure can be understood as an 'inference' (as in logic) or as a 'simulation' (as in engineering).[6]
  13. To write this in a more condensed format we can introduce some signs, S,V ⊩∑,X S', saying: if we have some state S and a goal V, then the simulator ∑ will, according to the change statements X, generate a new state S'. In such a setting the newly generated state S' can be understood as a 'theorem' which has been derived from the set of statements in the state S which are assumed to be 'true'. And because the derived new state is assumed to happen at some time in the 'future', 'after' the 'actual state S', this derived state can also be understood as a 'forecast'.
  14. Because the experts can change all parts 'at will' at any time, such a 'natural empirical theory' is an 'open entity' living in an ongoing 'communication process'.
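The change-statement schema of point (9) can be written down as a small code sketch. This is only an illustration under the stated assumptions; the names (apply_change, ST_plus, ST_minus) and the example statements are not oksimo identifiers.

```python
import random

# Sketch of point (9): a state S is represented by its set of statements ST.
# A change statement checks whether its condition is a subset of ST and then,
# with probability p, removes ST_minus and adds ST_plus.

def apply_change(ST, condition, ST_plus, ST_minus, p):
    """IF condition is a subset of ST THEN with probability p:
    ST* = (ST - ST_minus) | ST_plus, otherwise ST* = ST."""
    if condition <= ST and random.random() < p:
        return (ST - ST_minus) | ST_plus
    return set(ST)

S = {"The Main-Kinzig County exists.", "The number of citizens is known."}
rule = dict(condition={"The Main-Kinzig County exists."},
            ST_plus={"A new year has begun."},      # illustrative statement
            ST_minus=set(), p=1.0)

S_star = apply_change(S, **rule)                    # the follow-up state S*, a 'forecast'
print(S_star)
```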
Second Lessons Learned

It is interesting to note that from the set of statements in state S, which are assumed to be empirically true, together with some change statements X, whose proposed changes are also assumed to be 'true' and which have some probability P in the domain [0,1], one can forecast a set of statements of the state S* which shall be true, with a certainty depending on the preceding probability P and the overall uncertainty of the whole natural environment.

Confirmation – Non-Confirmation

A Theory with Forecasts

Having reached the formulation of an ordinary empirical theory T with the ingredients <S,V,X,⊩> and the derivation concept S,V ⊩∑,X S', it is possible to generate theorems as forecasts. A forecast here is not a single statement st* but a whole state S* consisting of a finite set of statements ST* which 'designate', according to the 'agreed meaning', a set of 'intended properties' which need a set of 'occurring empirical properties' observable by the experts. These observations are usually associated with 'agreed procedures of measurement', which generate as results 'observation statements'/ 'measurement statements'.

Within Time

Experts who cooperate in 'building' an ordinary empirical theory are themselves part of a process in time. Thus, making observations in the time-window (t1,t2), they have a state S describing some aspects of the world at 'that time' (t1,t2). When they then derive a forecast S* with their theory, this forecast describes — with some probability P — a 'possible state of the natural environment' which is assumed to happen in the 'future'. The precision of the predicted time at which the forecasted statements in S* should happen depends on the assumptions in S.

To 'check' the 'validity' of such a forecast it is necessary that the overall natural process reaches a 'point in time' — or a time window — indicated by the used 'time model', where the 'actual point in time' is measured by an agreed time machine (a mechanical clock). Because there is no observable time without a time machine, the classification of a certain situation S* as being 'now' at the predicted point of time depends completely on the used time machine.[7]

Given this the following can happen: According to the used theory a certain set of statements ST* is predicted to be ‘true’ — with some probability — either ‘at some time in the future’ or in the time-window (t1,t2) or at a certain point in time t*.

Validating Forecasts

If one of these cases 'happens', then the experts have the statements ST* of their forecast and a real situation in their natural environment which enables observations 'Obs' that are 'translated' into appropriate 'observation statements' STObs. The experts, with their predicted statements ST*, know a learned, agreed meaning M* of their predicted statements ST* as intended properties M* of ST*. The experts have also learned how to relate the intended meaning M* to the meaning MObs of the observation statements STObs. If the observed meaning MObs 'agrees sufficiently well' with the intended meaning M*, then the experts would agree in a statement that the intended meaning M* is 'fulfilled'/ 'satisfied'/ 'confirmed' by the observed meaning MObs. If not, then it would be stated that it is 'not fulfilled'/ 'not satisfied'/ 'not confirmed'.

The 'sufficient fulfillment' of the intended meaning M* of a set of statements ST* is usually expressed by a statement like "The statements ST* are 'true'". The case of 'no fulfillment' is less clear: it can be interpreted as 'being false' or as 'being unclear', i.e. no clear case of 'being true' and no clear case of 'being false'.

Forecasting the Number of Citizens

In the simple example used here we have the MKK county with an observed number of 418950 citizens in 2018. The simple theory used a change statement with a growth factor of 0.4% per year. This resulted in the forecast of 420.625 citizens for the year 2019.

If a new counting of the number of citizens in the year 2019 yielded 420.625, then there would be a perfect match, which could be interpreted as a 'confirmation' saying that the forecasted statement and the observed statement are 'equal' and that therefore the theory seems to match the natural environment through time. One could even say that the theory is 'true for the observed time'. Nothing would follow from this for the unknown future. Thus the 'truth' of the theory is not an 'absolute' truth but a truth 'within defined limits'.

We know from experience that in the case of forecasting numbers of citizens for some region — here a county — the situation is usually not as clear-cut as in this example.

This begins with the process of counting. Because it is very expensive to count the citizens of all cities of a county, this happens only about every 20 years. In between, the statistical office applies the method of 'forward projection'.[9] The state statistical office collects every year 'electronically' the numbers of 'births', 'deaths', 'outflow', and 'inflow' from the individual cities and modifies the last real census with these numbers. In the case of the state of Hessen this was the year 2011. The next census in Germany will happen in May 2022.[10] For such a census the data will be collected directly from the registration offices of the cities, supported by a control survey of 10% of the population.

Because there are data from the statistical office of the state of Hessen for June 2021 [8:p.9] saying that the MKK county had 421 936 citizens on 30 June 2021, we can compare this number with the theory forecast for the year 2021 of 423 997. This shows a difference between the numbers. The theory forecast is 'higher' than the observed value. What does this mean?

Purely arithmetically the forecast is 'wrong'. The responsible growth factor is too large. If one 'adjusts' it in a simplified linear way to '0.24%', then the theory yields a forecast for 2021 of 421 973 (observed: 421 936), but then the forecast for 2019 would be 419 955 (instead of 420 625).
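Such an adjusted factor can be estimated directly from the two observed counts. The following sketch assumes roughly three uniform yearly steps between the 2018 figure and the mid-2021 figure; it is a closely related estimate, not the exact simplified adjustment used above.

```python
# Sketch: fit a uniform yearly growth factor to the observed counts
# 418 950 (2018) and 421 936 (30 June 2021), assuming ~3 yearly steps in between.
observed_2018, observed_2021, steps = 418950, 421936, 3

growth = (observed_2021 / observed_2018) ** (1 / steps) - 1
print(round(100 * growth, 2), "% per year")      # ~0.24 % per year

citizens = float(observed_2018)
for year in range(2019, 2022):                   # re-run the projection with this factor
    citizens *= 1 + growth
    print(year, round(citizens))
# 2019 419943, 2020 420938, 2021 421936
```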

This shows at least the following aspects:

  1. The empirical observations as such can vary 'a little bit'. One has to clarify which degree of 'variance' is due to the method of measurement; this variance should then be taken into account when evaluating a theoretical forecast.
  2. As mentioned by the statistical office [9] there are four 'factors' which influence the final number of citizens in a region: 'births', 'deaths', 'outflow', and 'inflow'. These factors can change over time. Under 'normal conditions' the birth rate and the death rate are rather 'stable', but in the case of an epidemic or even a war this can change a lot. Outflow and inflow are very dynamic, depending on many factors. Thus the growth factor can be influenced considerably, and these factors are difficult to forecast.
Third Lessons Learned

Evaluating the 'relatedness' of some forecast F of an empirical theory T to the observations O in a given real natural environment is not a 'clear-cut' case. The 'precision' of such a relatedness depends on many factors, and each of these factors has some 'fuzziness'. Nevertheless, as experience shows, it can work in a limited way. And this 'limited way' is the maximum we can get. The most helpful contribution of an 'ordinary empirical theory' seems to be the forecast of 'what will happen if we make a certain set of assumptions'. Using such forecasts in the process of the experts can help to arrive at some 'informed guesses' for planning.

Forecast

The next post will show, how this concept of an ordinary empirical theory can be used by applying the oksimo paradigm to a concrete case. See HERE.

Comments

[1] Cities of the MKK-county: 24, see: https://www.wegweiser-kommune.de/kommunen/main-kinzig-kreis-lk

[2] Forecast of the development of the number of citizens in the MKK starting with 2018. See: https://statistik.hessen.de/zahlen-fakten/bevoelkerung-gebiet-haushalte-familien/bevoelkerung/tabellen

[3] Karl Popper, „A World of Propensities“,(1988) and „Towards an Evolutionary Theory of Knowledge“, (1989) in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol, (1990, repr. 1995)

[4] Karl Popper, „All Life is Problem Solving“, originally a lecture given in German in 1991, first published (in German) as „Alles Leben ist Problemlösen“ (1994), then in the book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York

[5] This points to the concept of ‘propensity’ which the late Popper has discussed in the papers [3] and [4].

[6] This concept of a 'generator' or an 'inference' recalls the general concept of Popper and of mainstream philosophy of a logical derivation: a 'set of logical rules' defines a 'derivation concept' which allows the 'derivation/ inference' of a statement s* as a 'theorem' from a set of statements S assumed to be true.

[7] The clock-based time is in the real world correlated with certain constellations of the real universe, but this — as a whole — is ‘changing’!

[8] Hessisches Statistisches Landesamt, “Die Bevölkerung der hessischen Gemeinden am 30. Juni 2021. Fortschreibungsergebnisse Basis Zensus 09. Mai 2011″, Okt. 2021, Wiesbaden, URL: https://statistik.hessen.de/sites/statistik.hessen.de/files/AI2_AII_AIII_AV_21-1hj.pdf

[9] Method of the forward projection of the statistical office of the State of Hessen (quoted here in translation): "Population: The population figures are projection results based on the population figures determined in the 2011 census. They are updated according to a nationally uniform projection method by evaluating electronically transmitted data on births and deaths from the registry offices as well as the moves in and out reported by the registration authorities. Persons are assigned to the population of a municipality according to the main-residence principle (population at the place of the sole or main residence)." ([8:p.2])

[10] Statistical Office state of Hessen, Next census 2022: https://statistik.hessen.de/zahlen-fakten/zensus/zensus-2022/zensus-2022-kurz-erklaert

OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, 31.March – 31.March  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

POPPERs POSITION IN THE CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1, ⊢>

AX1= {Birds can fly}

H1= {Peter is  a bird}

AX1, H1 ⊢ Peter can fly

Because  there exists a concrete object which is classified as a bird and this concrete bird with the name ‘Peter’ can  fly one can infer that the universal statement could be verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.

One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality understood as a conjunction of finitely many observations about concrete birds like 'Peter can fly' & 'Mary can fly' & … & 'AH2 can fly'. (cf. p.62)

The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:

AX1= {Birds can fly}

H2= {Doxy is  a bird, Doxy cannot fly}

AX1, H2 ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously the negation ~A then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent, then you can derive everything from this set. In this case you can no longer distinguish between true and false statements.

Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.
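That an inconsistent set of statements allows the derivation of everything can be checked mechanically for this small bird example. The following sketch uses a brute-force truth-table test of semantic entailment; the propositional encoding is my illustrative assumption, not Popper's formulation.

```python
from itertools import product

# Sketch: b stands for 'Doxy is a bird', f for 'Doxy can fly';
# the axiom AX1 'Birds can fly', instantiated for Doxy, becomes 'b implies f'.

def entails(premises, conclusion):
    """Brute-force semantic entailment: no valuation makes all premises true
    while the conclusion is false."""
    for b, f in product([True, False], repeat=2):
        if all(p(b, f) for p in premises) and not conclusion(b, f):
            return False
    return True

AX1 = lambda b, f: (not b) or f        # 'Birds can fly'
H2a = lambda b, f: b                   # 'Doxy is a bird'
H2b = lambda b, f: not f               # 'Doxy cannot fly'

print(entails([AX1, H2a, H2b], lambda b, f: f))        # True: 'Doxy can fly'
print(entails([AX1, H2a, H2b], lambda b, f: not f))    # True: '~Doxy can fly'
print(entails([AX1, H2a, H2b], lambda b, f: False))    # True: anything follows
```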

Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which require some faith that these concepts will prove themselves every time one tries to apply them. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually philosophers of science, who are only a special kind of philosophers, e.g. a person like Popper.

Reading the text of Popper one can identify the following elements which seem to be important for describing scientific theories in a broader framework:

A scientific theory from a point of  view of Philosophy of Science represents a structure like the following one (minimal version):

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S to expressions E of L. Those expressions E which are considered to be observable and classified as true are called true expressions E+; the others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, which are universal statements allowing the logical derivation of expressions, called theorems ET of the theory T, which are classified as logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true and also empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time, as in the example from the preceding paragraph:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the empirically true expressions and confronted with the known true empirical expressions, has become inconsistent: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While the requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this does not seem plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is 'something' which they classify as a 'bird'. Thus someone could say 'There is something which is a bird' or 'There is some bird' or 'There is a bird'. If there are two somethings which are 'understood' as being birds, then they could say 'There are two birds' or 'There is a blue bird' (if one has the color 'blue') and 'There is a red bird', or 'There are two birds. The one is blue and the other is red'. This shows that human actors can relate their 'concrete perceptions' to more abstract concepts and can map these concepts into expressions. According to Popper, in this 'bottom-up' way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a 'something' or to say there is a 'bird' establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a 'bird', can 'move' from one position to another by 'walking' or by 'flying'. While 'walking' they change their position connected to the 'ground', while during 'flying' they 'go up in the air'. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus 'flying' is different from 'throwing something' up in the air.
  3. The expression 'A bird can fly', understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping called a meaning function. Without a meaning function the expression 'A bird can fly' has no meaning as such.
  4. Other expressions like 'X can fly' or 'A bird can Y' or 'Y(X)' share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression 'Y(X)' shall be interpreted as 'Predicate(Object)', and that a possible 'instance' for the predicate could be 'Can Fly' and for the object 'a bird', we could get 'Can Fly(a Bird)', translated as 'The object a Bird has the property can fly', or shortly 'A Bird can fly'. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.
Axioms and Empirical Interpretations

The basic idea of a system of axioms AX is — according to Popper — that the axioms, as universal expressions, represent a system of equations in which the general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term 'model' for the set of admissible values which can satisfy the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations  since it cannot be refuted by the falsification of its consequences ; for these too must be analytic.(cf. p.73) His main problem with axioms is,  that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc . They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system , e.g . physics …. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems .”(p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument goes back to the primary problem of Popper that universal names cannot be directly interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, it is no problem in principle for human actors to relate any kind of abstract expression to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L are never used in isolation! The usage of expressions is always connected to human actors using expressions as part of a language L which comprises, together with the set of possible expressions E, also the built-in meaning function μ which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped into something real if the human actors agree about such mappings!

Example:

Let us have a look at another example.

If we take the system of axioms AX as the following schema, AX = {a+b=c}, then this schema as such has no clear meaning. But if the experts interpret it as an operation '+' with some arguments as part of a math theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as sets of numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom AX, whose empirical meaning is still unclear.
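Whether a set of value tuples satisfies such a schema can be checked mechanically; the following is only a sketch, and the Python representation of the axiom and of the candidate tuples is my own illustrative assumption.

```python
# Sketch: checking whether sets of value tuples satisfy the schema a + b = c.
axiom = lambda a, b, c: a + b == c

m_good = [(1, 2, 3), (2, 3, 5)]          # the partial model m from the text
m_bad  = [(1, 2, 3), (2, 3, 6)]          # a set of values that does not satisfy it

print(all(axiom(*t) for t in m_good))    # True  -> m_good satisfies the axiom
print(all(axiom(*t) for t in m_bad))     # False -> m_bad is not a model
```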

It is conceivable that one uses this formalism to describe empirical facts, for instance a group of humans collecting some objects. Different people bring objects; the individual contributions are recorded on a sheet of paper, and at the same time they put their objects in some box. From time to time someone looks into the box and counts the objects in it. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c' as a contradiction in the case where a+b=c is assumed to be theoretically given and c' ≠ c, then with '1+2 = 3' & ~'1+2 = 3' we would have a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … But what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter, claiming that he himself may have taken the missing egg? But would this make sense? Why should the counter write down how many eggs have been delivered and thereby make the difference visible? …

Thus to interpret some abstract expression with regard to some observable reality is not a problem in principle, but it can eventually be unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery, First published 1935 in German as Logik der Forschung, then 1959 in English by  Basic Books, New York (more editions have been published  later; I am using the eBook version of Routledge (2002))

 

 

HMI Analysis for the CM:MI paradigm. Part 1

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 25, 2021
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Last change: March 16, 2021 (Some minor corrections)
HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 1
Introduction

Since January 2021 an intense series of posts has been published on how the new ideas manifested in the new software published in this journal can adequately be reflected in the DAAI theoretical framework. Because these ideas initially included parts of philosophy, philosophy of science, and philosophy of engineering, these posts were first published in the German blog of the author (cognitiveagent.org). This series of posts started with an online lecture for students of the University of Leipzig together with students of the 'Hochschule für Technik, Wirtschaft und Kultur (HTWK)' on January 12, 2021. Here is the complete list of posts:

What follows in this text is an English version of the following 5 posts. This is not a 1-to-1 translation but rather a new version:

HMI Analysis as Part of Systems Engineering

HMI analysis as part of systems engineering illustrated with the oksimo software
HMI analysis for the CM:MI paradigm illustrated with the oksimo software concept

As described in the original DAAI theory paper the whole topic of HMI is here understood as a job within the systems engineering paradigm.

The specification process is a kind of a ‘test’ whether the DAAI format of the HMI analysis works with this new  application too.

As a reminder, the main points of the integrated engineering concept are the following:

  1. A philosophical  framework (Philosophy of Science, Philosophy of Engineering, …), which gives the fundamentals for such a process.
  2. The engineering process as such where managers and engineers start the whole process and do it.
  3. After the clarification of the problem to be solved and a minimal vision, where to go, it is the job of the HMI analysis to clarify which requirements have to be fulfilled, to find an optimal solution for the intended product/ service. In modern versions of the HMI analysis substantial parts of the context, i.e. substantial parts of the surrounding society, have to be included in the analysis.
  4. Based on the HMI analysis  in  the logical design phase a mathematical structure has to be identified, which integrates all requirements sufficiently well. This mathematical structure has to be ‘map-able’ into a set of algorithms written in  appropriate programming languages running on  an appropriate platform (the mentioned phases Problem, Vision, HMI analysis, Logical Design are in reality highly iterative).
  5. During the implementation phase the algorithms will be translated into a real working system.
Which Kinds of Experts?

While the original version of the DAAI paper assumes as 'experts' only the typical managers and engineers of an engineering process, including all the typical settings, the new extended version under the label CM:MI (Collective Man-Machine Intelligence) has been generalized to any kind of human person as an expert, which allows a maximum of diversity. No one is the 'absolute expert'.

Collective Intelligence

'Intelligence' is understood here as the whole of knowledge, experience, and motivations which can be the moving momentum inside a human person. 'Collective' refers to the situation where more than one person communicates with other persons to share this intelligence.

Man-Machine Symbiosis

Today there are discussions going on about the future of man and (intelligent) machines. Most of these discussions are very weak because they lack clear concepts of intelligent machines as well as of what a human person is. In the CM:MI paradigm the human person (together with all other biological systems) is seen at the center of the future (for reasons based on modern theories of biological evolution), and the intelligent machines are seen as supporting devices (although it is assumed here that 'strong' machine intelligence will be used, compared to the actual 'weak' machine intelligence of today).

CM:MI by Design

Although we know that groups of many people are 'in principle' capable of sharing intelligence to define problems and visions, construct solutions, test the solutions, etc., we also know that the practical limits of the brains and of communication are quite narrow. For special tasks a computer can be much, much better. Thus the CM:MI paradigm provides an environment for groups of people to do shared planning and testing in a new way, using only normal language. The software is designed to enable new kinds of shared knowledge about shared models of future worlds. Only with such a truly general framework can the vision of a sustainable society, as pointed out by the United Nations since 1992, become real.

Continuation

Look here.

KOMEGA REQUIREMENTS: From the minimal to the basic version

ISSN 2567-6458, 18.October  2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document is part of the Case Studies section.

CONTENT

Here we present the ideas how to extend the minimal version to a first basic version. At least two more advanced levels will follow.

VIDEO (EN)

(Last change: Oct 17, 2020)

VIDEO(DE)

(last change: Oct 18, 2020)

CASE STUDIES

eJournal: uffmm.org
ISSN 2567-6458, 4.May  – 16.March   2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory of Jan 18, 2020 the concept has been further developed, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], so that it now addresses any kind of expert for any kind of simulation-based development, testing and gaming. Additionally the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (different from the mainstream concept 'artificial intelligence'). The new concept can be used with every normal language; there is no need for any special programming language! Go back to the overall framework.

COLLECTION OF PAPERS

There exists only a loosely  order  between the  different papers due to the character of this elaboration process: generally this is an experimental philosophical process. HMI Analysis applied for the CM:MI paradigm.

 

JANUARY 2021 – OCTOBER 2021

  1. HMI Analysis for the CM:MI paradigm. Part 1 (Febr. 25, 2021)(Last change: March 16, 2021)
  2. HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision (Febr. 27, 2021)
  3. HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories (March 2, 2021)
  4. HMI Analysis for the CM:MI paradigm. Part 4. Tool Based Development with Testing and Gaming (March 3-4, 2021, 16:15h)

APRIL 2020 – JANUARY 2021

  1. From Men to Philosophy, to Empirical Sciences, to Real Systems. A Conceptual Network. (Last Change Nov 8, 2020)
  2. FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline how one can map the DAAI paradigm directly into the GCA paradigm (April-19,2020): case1-daai-gca-v1
  3. CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology) (July 28, 2020)
  4. A first GCA open research project [GCA-OR No.1].  This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
  5. Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process looking to the acting actors  where  certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly  not  separated from society (April-9, 2020): case1-population-start-part0-v1
  6. Bootstrapping some Citizens. This  paper clarifies the set of general assumptions which can and which should be presupposed for every kind of a real world dynamical model (April-4, 2020): case1-population-start-v1-1
  7. Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combing a usual web-conference tool with an interactive web-page by our own  (23.May 2020): HSGE-v2 (May-5, 2020): HSGE-v0-1
  8. The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction.(July 16, 2020)
  9. CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model (June 19.-30., 2020)
  10. KOMEGA REQUIREMENTS No.1. Basic Application Scenario (last change: August 11, 2020)
  11. KOMEGA REQUIREMENTS No.2. Actor Story Overview (last change: August 12, 2020)
  12. KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S (last change: August 12, 2020)
  13. The Simulator as a Learning Artificial Actor [LAA]. Version 1 (last change: August 23, 2020)
  14. KOMEGA REQUIREMENTS No.4, Version 1 (last change: August 26, 2020)
  15. KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario (last change: August 28, 2020)
  16. Extended Concept for Meaning Based Inferences. Version 1 (last change: 30.April 2020)
  17. Extended Concept for Meaning Based Inferences – Part 2. Version 1 (last change: 1.September 2020)
  18. Extended Concept for Meaning Based Inferences – Part 2. Version 2 (last change: 2.September 2020)
  19. Actor Epistemology and Semiotics. Version 1 (last change: 3.September 2020)
  20. KOMEGA REQUIREMENTS No.4, Version 3. Basic Application Scenario (last change: 4.September 2020)
  21. KOMEGA REQUIREMENTS No.4, Version 4. Basic Application Scenario (last change: 10.September 2020)
  22. KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario (last change: 13.September 2020)
  23. KOMEGA REQUIREMENTS: From the minimal to the basic Version. An Overview (last change: Oct 18, 2020)
  24. KOMEGA REQUIREMENTS: Basic Version with optional on-demand Computations (last change: Nov 15, 2020)
  25. KOMEGA REQUIREMENTS: Interactive Simulations (last change: Nov 12, 2020)
  26. KOMEGA REQUIREMENTS: Multi-Group Management (last change: December 13, 2020)
  27. KOMEGA-REQUIREMENTS: Start with a Political Program. (last change: November 28, 2020)
  28. OKSIMO SW: Minimal Basic Requirements (last change: January 8, 2021)

 

 

ACTOR-ACTOR INTERACTION ANALYSIS – A rough Outline of the Blueprint

eJournal: uffmm.org,
ISSN 2567-6458, 13.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last corrections: 14.February 2019 (added some more keywords; added emphases for central words)

Change: 5.May 2019 (added the aspect of simulation and gaming; extended the view of the driving actors)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the blueprint of the whole AAI analysis process. Here I leave out the topic of actor models (AM); the aspect of simulation and gaming is mentioned only briefly. For these topics see other posts.

THE AAI ANALYSIS BLUEPRINT

Figure: Blueprint of the whole AAI analysis process including the epistemological assumptions. Not shown here are the topic of actor models (AM) as well as simulation.

The Actor-Actor Interaction (AAI) analysis is understood here as part of an embracing systems engineering process (SEP), which starts with the statement of a problem (P) that includes a vision (V) of an improved alternative situation. It then has to be analyzed what such a new improved situation S+ looks like and how one can realize certain tasks (T) in an improved way.

DRIVING ACTORS

The driving actors for such an AAI analysis are at least one stakeholder (STH) who communicates a problem P and an envisioned solution (ES) to an expert (EXPaai) with sufficient AAI experience. This expert will take the lead in the process of transforming the problem and the envisioned solution into a working solution (WS).

In the classical industrial case the stakeholder can be a group of managers from some company, and the expert is likewise represented by a whole team of experts from different disciplines, with the AAI perspective as the leading perspective.

In another case, which I will call here the communal case (e.g. a whole city), the stakeholder as well as the experts are members of the communal entity. As in the before-mentioned case there is some commonly accepted problem P combined with a first envisioned solution ES, which shall be analyzed: What is needed to make it work? Can it work at all? What are the costs? Many other questions can arise. The challenge of including all relevant experience and knowledge from all participants is at the center of the communication; to transform this available knowledge into some working solution which satisfies all stated requirements of all participants is a central condition for the success of the project.

EPISTEMOLOGY

It has to be taken into account that the driving actors are able to do this job because they have in their bodies brains (BRs) which in turn include some consciousness (CNS). The processes and states beyond consciousness are here called 'unconscious', and the set of all these unconscious processes is called 'the Unconsciousness' (UCNS).

For more details on the cognitive processes see the post on the philosophical framework as well as the post on the bottom-up process. Both posts shall be integrated into one coherent view in the future.

SEMIOTIC SUBSYSTEM

An important set of substructures of the unconsciousness are those which enable symbolic language systems with so-called expressions (L) on one side and so-called non-expressions (~L) on the other. Embedded in a meaning relation (MNR), the set of non-expressions ~L functions as the meaning (MEAN) of the expressions L, written as a mapping MNR: L <—> ~L. Depending on the involved sensors, the expressions L can occur either as acoustic events L_spk, as visual patterns written as text L_txt, as visual patterns in the form of pictures L_pict, or even in other formats, which will not be discussed here. The non-expressions can occur in every format which the brain can handle.

While written (symbolic) expressions L are only associated with the intended meaning through encoded mappings in the brain, the spoken expressions L_spk as well as the pictorial ones L_pict can show some similarities with the intended meaning. Within acoustic expressions one can 'imitate' some sounds which are part of a meaning; pictorial expressions can even 'imitate' the visual experience of the intended meaning to a high degree, but clearly not every kind of meaning.
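To make the structure of the meaning relation a bit more tangible, here is a minimal, purely illustrative sketch in Python; all names and values (L_spk, L_txt, L_pict, RED_LIGHT_MEANING, MNR) are hypothetical stand-ins chosen for this post, not part of any existing software:

```python
# Minimal sketch of a meaning relation MNR: L <---> ~L.
# All names and values are hypothetical illustrations.

# Expressions L in different modes: spoken, textual, pictorial.
L_spk = "spoken: 'red traffic light'"
L_txt = "text: 'red traffic light'"
L_pict = "picture: <icon of a red circle>"

# A non-expression ~L: a stand-in for a brain-internal (perceptual) structure.
RED_LIGHT_MEANING = {"kind": "perceptual pattern", "label": "RED_LIGHT_SITUATION"}

# The meaning relation MNR as a list of pairs (expression, non-expression).
MNR = [(L_spk, RED_LIGHT_MEANING), (L_txt, RED_LIGHT_MEANING), (L_pict, RED_LIGHT_MEANING)]

def meaning_of(expression):
    """All non-expressions ~L associated with a given expression L."""
    return [ne for (e, ne) in MNR if e == expression]

def expressions_for(label):
    """All expressions L whose associated meaning carries the given label."""
    return [e for (e, ne) in MNR if ne["label"] == label]

print(meaning_of(L_txt))
print(expressions_for("RED_LIGHT_SITUATION"))
```

The point of the sketch is only that one and the same meaning can be reached from several modes of expressions, as described above.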

DEFINING THE MAIN POINT OF REFERENCE

Because the space of possible problems and visions is nearly infinitely large, one has to define for a certain process the problem of the actual process together with the vision of a 'better state of the affairs'. This is realized by a description of the problem in a problem document D_p as well as in a vision statement D_v. Because usually a vision does not come without a given context, one has to add all the constraints (C) which have to be taken into account for the possible solution. Examples of constraints are 'non-functional requirements' (NFRs) like 'safety', 'real time', or 'without barriers' (for handicapped people). Part of the non-functional requirements are also definitions of win-lose states as part of a game.
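As an illustration only (the content and the field names are assumed examples, not a prescribed format), the main point of reference could be captured in a small data structure holding the problem document D_p, the vision statement D_v, and the constraints C:

```python
# Hypothetical sketch of the main point of reference: problem document D_p,
# vision statement D_v, and constraints C (including non-functional requirements, NFRs).
point_of_reference = {
    "D_p": "Actual living space per citizen is below the accepted standard E.",
    "D_v": "Every citizen has at least the recommended living space.",
    "C": [
        {"name": "safety", "type": "NFR"},
        {"name": "real time", "type": "NFR"},
        {"name": "without barriers", "type": "NFR"},
        {"name": "win-lose states for gaming", "type": "NFR"},
    ],
}

# Example access: list all constraint names.
print([c["name"] for c in point_of_reference["C"]])
```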

AAI ANALYSIS – BASIC PROCEDURE

If the AAI check has been successful, and there is at least one task T to be done in an assumed environment ENV, and there is at least one executing actor A_exec for this task as well as an assisting actor A_ass, then the AAI analysis can start.

ACTOR STORY (AS)

The main task is to elaborate a complete description of a process which includes a start state S* and a goal state S+, where the participating executive actors A_exec can reach the goal state S+ by doing some actions. While the imagined process p_v is a virtual (= cognitive/mental) model of an intended real process p_e, this virtual model p_v can only be communicated by symbolic expressions L embedded in a meaning relation. Thus the elaboration/construction of the intended process will be realized by using appropriate expressions L embedded in a meaning relation. This can be understood as a basic mapping of sensor-based perceptions of the supposed real world into some abstract virtual structures automatically (unconsciously) computed by the brain. A special kind of this mapping is the case of measurement.

In this text especially three types of symbolic expressions L will be used: (i) pictorial expressions L_pict, (ii) textual expressions of a natural language L_txt, and (iii) textual expressions of a mathematical language L_math. The meaning part of these symbolic expressions as well as the expressions themselves will be called here an actor story (AS) with the different modes pictorial AS (PAS), textual AS (TAS), as well as mathematical AS (MAS).

The basic elements of an actor story (AS) are states which represent sets of facts. A fact is an expression of some defined language L which can be decided as being true in a real situation or not (the past and the future are special cases of such truth clarifications). Facts can be identified as actors which can act on their own. The transformation from one state to a follow-up state has to be described with sets of change rules. The combination of states and change rules defines mathematically a directed graph (G).
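The following minimal sketch (the facts and the rule name are hypothetical, chosen only for illustration) shows how states as sets of facts and change rules together induce a directed graph G:

```python
# Hypothetical sketch: a state is a set of facts, a change rule maps a state
# to a follow-up state, and states plus rule applications form a directed graph G.
S0 = frozenset({"population=1000", "living_space=30000"})

def build_houses(state):
    """Change rule: if the old living-space fact holds, replace it by the new one."""
    if "living_space=30000" in state:
        return (state - {"living_space=30000"}) | {"living_space=35000"}
    return state

change_rules = [build_houses]

def successors(state):
    """Edges of the graph G leaving the given state."""
    return [(rule.__name__, rule(state)) for rule in change_rules if rule(state) != state]

for rule_name, follow_up in successors(S0):
    print(rule_name, "->", sorted(follow_up))
```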

Based on such a graph it is possible to derive an automaton (A) which can be used as a simulator. A simulator allows simulations. A concrete simulation takes a start state S0 as the actual state S* and computes with the aid of the change rules one follow-up state S1. This follow-up state then becomes the new actual state S*. Thus the simulation constitutes a continuous process which generally can be infinite. To make the simulation finite one has to define some stop criteria (C*). A simulation can run passively without any interruption or interactively. The interactive mode allows different external actors to select certain real values for the available variables of the actual state.
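A possible reading of such a simulator, sketched here under the assumption that states and change rules are represented as in the previous sketch (all names hypothetical):

```python
# Hypothetical sketch of a simulator: starting with S0 as the actual state S*,
# change rules are applied step by step until a stop criterion C* holds.
def simulate(start_state, change_rules, stop_criterion, max_steps=100):
    state = start_state                 # the actual state S*
    history = [state]
    for _ in range(max_steps):          # max_steps keeps the run finite even without C*
        if stop_criterion(state):
            break
        for rule in change_rules:
            follow_up = rule(state)
            if follow_up != state:      # the follow-up state becomes the new actual state S*
                state = follow_up
                history.append(state)
                break
        else:
            break                       # no rule applicable: the process halts
    return history

# Example run with the state S0 and the rule build_houses from the sketch above.
run = simulate(S0, change_rules, lambda s: "living_space=35000" in s)
print(len(run) - 1, "simulation step(s) executed")
```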

If in the problem definition certain win-lose states have been defined, then one can turn an interactive simulation into a game where the external actors can try to manipulate the process so as to reach one of the defined win states. As soon as someone (which can be a team) has reached a win state, the responsible actor (or team) has won. Such games can be repeated to allow the accumulation of wins (or losses).
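A compact, purely illustrative sketch of how an interactive simulation could be turned into a game once win states are defined; for simplicity the state is here a dictionary of variables, and all names and numbers are assumptions:

```python
# Hypothetical sketch: an interactive simulation becomes a game when win states are
# defined and external actors (or teams) choose values for the available variables.
def play(start_state, choose_action, win_states, max_rounds=10):
    state = dict(start_state)
    for round_no in range(1, max_rounds + 1):
        state = choose_action(state)           # an external actor manipulates the process
        if any(all(state.get(k) == v for k, v in win.items()) for win in win_states):
            return f"win state reached in round {round_no}", state
    return "no win state reached", state

# Example: the team wins as soon as the variable 'parks' reaches 3.
def team_move(state):
    state["parks"] = state.get("parks", 0) + 1
    return state

print(play({"parks": 0}, team_move, win_states=[{"parks": 3}]))
```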

Gaming allows a far better experience of the advantages or disadvantages of some actor story than a rather loose simulation. Therefore the probability of detecting relevant aspects of an actor story with its given constraints is quite high with gaming, and this increases the probability of improving the whole concept.

Based on an actor story with a simulator it is possible to increase the cognitive power of exploring the future even more. There exists the possibility to define an oracle algorithm as well as different kinds of intelligent algorithms to support the human actor further. This will be described in other posts.

TAR AND AAR

If the actor story is completed (in a certain version v_i), then one can extract from the story the input-output profiles of every participating actor. This list represents the task-induced actor requirements (TAR). If one is looking for concrete real persons to do the job of an executing actor, the TAR can be used as a benchmark for assessing candidates for this job. The profiles of the real persons are called here actor-actor induced requirements (AAR), that is, the real profile compared with the ideal profile of the TAR. If the 'distance' between AAR and TAR exceeds some threshold, then the candidate has either to be rejected, or one can offer some training to improve his AAR; the other option is to change the conditions of the TAR so that the TAR comes closer to the AARs.

The TAR is valid for the executive actors as well as for the assisting actors A_ass.
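A minimal sketch of such a comparison, with assumed requirement dimensions, assumed numeric values, and an assumed threshold; none of these values are prescribed by the AAI theory itself:

```python
# Hypothetical sketch: comparing a candidate's actor profile (AAR) against the
# task-induced actor requirements (TAR) using a simple distance and an assumed threshold.
def profile_distance(tar, aar):
    """Mean absolute deviation over the requirement dimensions of the TAR."""
    keys = list(tar.keys())
    return sum(abs(tar[k] - aar.get(k, 0.0)) for k in keys) / len(keys)

# Assumed requirement dimensions and values (illustration only).
TAR = {"reaction_time": 0.8, "domain_knowledge": 0.9, "language_skill": 0.7}
candidate_AAR = {"reaction_time": 0.6, "domain_knowledge": 0.8, "language_skill": 0.7}

THRESHOLD = 0.15  # assumed acceptance threshold
d = profile_distance(TAR, candidate_AAR)
print("distance:", round(d, 3), "->",
      "accept" if d <= THRESHOLD else "reject, train, or adapt the TAR")
```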

CONSTRAINTS CHECK

If the actor story has reached a certain completion in some version v_i, one has to check whether the different constraints which accompany the vision document are satisfied by the story: AS_vi |- C.

Such an evaluation is only possible if the constraints can be interpreted with regard to the actor story AS in version v_i in such a way that the constraints can be decided.

For many constraints it can happen that they cannot, or not completely, be decided on the level of the actor story but only in a later phase of the systems engineering process, when the actor story is implemented in software and hardware.
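The following sketch illustrates, with assumed constraints and assumed states, one way such a constraints check AS_vi |- C could be organized: each constraint is a predicate over the states of the actor story, and a constraint which refers to information not yet present on the actor-story level is reported as undecidable:

```python
# Hypothetical sketch of a constraints check AS_vi |- C: every constraint is read as a
# predicate which must hold in all states of the actor story; a constraint referring to
# information not yet present on the actor-story level is reported as undecidable.
def check_constraints(actor_story_states, constraints):
    report = {}
    for name, predicate in constraints.items():
        try:
            report[name] = all(predicate(state) for state in actor_story_states)
        except KeyError:
            report[name] = "not decidable on the actor-story level"
    return report

# Assumed example states and constraints (illustration only).
states = [{"barrier_free": True, "response_time_s": 1.2},
          {"barrier_free": True, "response_time_s": 0.9}]
constraints = {
    "without barriers": lambda s: s["barrier_free"],
    "real time": lambda s: s["response_time_s"] < 1.0,
    "safety": lambda s: s["crash_rate"] < 0.01,   # decidable only after implementation
}
print(check_constraints(states, constraints))
```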

MEASURING OF USABILITY

Using the actor story as a benchmark, one can test the usability of the whole process by doing usability tests.


AAI THEORY V2 – DEFINING THE CONTEXT

eJournal: uffmm.org,
ISSN 2567-6458, 24.January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the second chapter, where you have to define the context of the problem which should be analyzed.

DEFINING THE CONTEXT OF PROBLEM P

  1. A defined problem P identifies at least one property associated with a configuration which has a lower value x than the value y inferred from an accepted standard E.
  2. The problem P is always part of some environment ENV which interacts with it.
  3. To approach an improved configuration S, measured by some standard E, starting with a problem P, one needs a process characterized by a set of necessary states Q which are connected by necessary changes X.
  4. Such a process can be described by an actor story AS.
  5. All properties which belong to the whole actor story, and therefore have to be satisfied by every state q of the actor story, are called non-functional process requirements (NFPRs). If required properties are associated with only one state, but with that state as a whole, then these requirements are called non-functional state requirements (NFSRs).
  6. An actor story can include many different sequences, where every sequence is called a path PTH. A finite set of paths can represent a task T which has to be fulfilled. Within the environment of the defined problem P it must be possible to identify at least one task T to be realized from some start state to some goal state. The realization of a task T is assumed to be 'driven' by input-output systems which are called actors A (a minimal sketch of these structures is given after this list).
  7. Additionally it must be possible to identify at least one executing actor A_exec doing a task and at least one assisting actor A_ass helping the executing actor to fulfill the task.
  8. A state q represents all needed actors as part of the associated environment ENV. Therefore a state q can be analyzed as a network of elements interacting with each other. But this is only one possible structure for an analysis besides others.
  9. For the analysis of a possible solution one can distinguish at least two overall strategies:
    1. Top-down: There exists a group of experts EXPs which will analyze a possible solution, test it, and then propose it as a solution for others.
    2. Bottom-up: There exists a group of experts EXPs too, but additionally there exists a group of customers CTMs which will be guided by the experts to use their own experience to find a possible solution.
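A compact sketch of the context structures listed above, using assumed example content; the names and values are illustrative only and do not come from any concrete project:

```python
# Hypothetical sketch of the context structures: states Q, changes X, paths PTH,
# a task T, the participating actors, and non-functional requirements.
context = {
    "problem_P": "living space per citizen is below the accepted standard E",
    "environment_ENV": "the city together with its surrounding region",
    "states_Q": ["q0", "q1", "q2", "q_goal"],
    "changes_X": [("q0", "q1"), ("q1", "q2"), ("q2", "q_goal")],
    "paths_PTH": [["q0", "q1", "q2", "q_goal"]],
    "task_T": {"start": "q0", "goal": "q_goal"},
    "actors": {"A_exec": ["city planning office"], "A_ass": ["simulation tool"]},
    "NFPR": ["limited disruption during the whole conversion process"],
    "NFSR": ["safety in the goal state", "good transportation in the goal state"],
}

# Example access: the single path of the only defined task.
print(context["paths_PTH"][0])
```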

EXAMPLE

The mayor of a city has identified as a problem the relationship between the actual population number POP, the amount of actually available living space LSP0, and the amount of recommended living space LSPr given by some standard E. The population of his city is steadily interacting with populations in the environment: citizens are moving into the environment MIGR- and citizens from the environment are arriving MIGR+. The population, the city, as well as the environment can be characterized by a set of parameters <P1, …, Pn> called a configuration, which represents a certain state q at a certain point of time t. To convert the actual configuration, called a start state q0, into a new configuration S, called a goal state q+, with better values requires the application of a defined set of changes X which change the start state q0 stepwise into a sequence of states qi which finally will end up in the desired goal state q+. A description of all these states necessary for the conversion of the start state q0 into the goal state q+ is called here an actor story AS.

Because a democratically elected mayor of the city wants to be 'liked' by his citizens, he will require that this conversion process should end up in a goal state which is 'not harmful' for his citizens, which should support a 'secure' and 'safe' environment, 'good transportation', and things like that. This illustrates non-functional state requirements (NFSRs). Because the mayor also does not want too much trouble during the conversion process, he will also require some limits for the whole conversion process, that is, for the whole actor story. This illustrates non-functional process requirements (NFPRs).

To realize the intended conversion process the mayor needs several executing actors which are doing the job and several other assisting actors helping the executing actors. To be able to use the available time and resources 'effectively', the executing actors need defined tasks which have to be realized to come from one state to the next. Often more than one sequence of states is possible, either alternatively or in parallel. A certain state at a certain point of time t can be viewed as a network where all participating actors are connected with each other in many ways, interacting in several ways and thereby influencing each other. This realizes different kinds of communications with different kinds of contents, allows the exchange of material, and can imply the change of the environment.

Until today the mayors of cities have used as their preferred strategy for realizing conversion processes selected small teams of experts doing their job in a top-down manner, leaving the citizens more or less untouched, at least without a serious participation in the whole process. From now on it is possible and desirable to turn the strategy from top-down to bottom-up. This implies that the selected experts enable a broad communication with potentially all citizens who are touched by a conversion, including the knowledge, experience, skills, visions, etc. of these citizens by applying new methods possible in the new digital age.
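To make the example more concrete, here is a small, purely illustrative simulation of the conversion process; all numbers (population, living space, migration, building rate, recommended standard) are assumptions invented for this sketch, not data:

```python
# Hypothetical sketch of the mayor's example: a start state q0 with population POP and
# available living space LSP, a recommended living space per citizen LSPr (standard E),
# and change rules (building, migration) applied until the goal state q+ is reached.
# All numbers are assumptions chosen only for illustration.
state = {"POP": 100_000, "LSP": 3_000_000}   # inhabitants and square meters (assumed)
LSPr = 35                                    # recommended m^2 per citizen (assumed)
MIGR_PLUS, MIGR_MINUS = 1_200, 800           # assumed yearly in- and out-migration
BUILD_PER_YEAR = 120_000                     # assumed m^2 of new living space per year

def goal_reached(s):
    """Goal state q+: the living space per citizen meets the recommended standard E."""
    return s["LSP"] / s["POP"] >= LSPr

year = 0
while not goal_reached(state) and year < 50:
    state["POP"] += MIGR_PLUS - MIGR_MINUS   # migration changes the population
    state["LSP"] += BUILD_PER_YEAR           # building changes the available living space
    year += 1

print("goal state reached after", year, "years;",
      "living space per citizen:", round(state["LSP"] / state["POP"], 1), "m^2")
```

Such a simple run already shows the kind of question an interactive simulation or a game would let the citizens explore: how the goal state shifts when migration, building rates, or the standard E itself are changed.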