oens_of_algorithms.md

File metadata and controls

55 lines (28 loc) · 7.31 KB

Open-ended natural selection of interacting code-data-dual algorithms as a property analogous to Turing completeness (also on novel stable complexity emergence)

Peter Zagubisalo, November 2019, minor update October 2021.

The goal of this article is to promote an unsolved mathematical modelling problem (as opposed to a math problem or question). Unlike a math question, it does not yet have a formal definition, but I find it clear enough and quite interesting. I came to this modelling problem from a philosophical direction, but the problem is interesting in itself.

Preamble

The notion of Turing completeness is a formalization of computability and of algorithms (which were previously performed by humans and by DNA). There are different formalizations (including the Turing machine, μ-recursive functions, and λ-calculus), but they all share the Turing completeness property and can perform equivalent algorithms; thus they form an equivalence class.

Open-ended evolution (OEE) is a niche research program whose goal is to build an artificial life model with natural selection whose evolution does not stop at some level of complexity but progresses further (ultimately to intelligent agents, after some enormous simulation time). I am not aware of the current state of attempts to formulate open-endedness criteria, but I am almost certain that no such formulation exists yet: it would have to come either from the results of a successful simulation or from actually understanding and confirming what open-endedness requires, and I have heard of neither.

The modelling problem

Just as algorithms performed by humans were formalized and the property of Turing completeness was defined, the same formalization can presumably be done for the open-ended evolution observed in nature. Evolution went from precellular organisms to unicellular organisms and finally to Homo sapiens, driven by the natural selection postulates (reproduction/doubling, heredity, random variation, selection/death, individuals-and-environment or individuals-are-environment). The Red Queen hypothesis and the balance of cooperation and competition resulted in increasing complexity. The open-endedness property here is analogous to the Turing completeness property: it could be formalized in different ways, but those formalizations would still form an equivalence class.
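As a reference point, the natural selection postulates above can be sketched as a minimal selection loop. Everything in this sketch (bit-string genomes, the fixed fitness function, the capacity limit) is an illustrative assumption of mine, not part of the proposed model; note in particular that a fixed external fitness function is exactly the kind of ceiling that open-endedness is meant to avoid.

```python
import random

def mutate(genome, rate=0.1):
    # variation: each bit flips independently with probability `rate`
    return [g ^ 1 if random.random() < rate else g for g in genome]

def fitness(genome):
    # toy stand-in for selective pressure: count of 1-bits
    # (a real OEE model would have no such external function)
    return sum(genome)

def step(population, capacity=50):
    # reproduction with heredity + variation
    offspring = [mutate(g) for g in population]
    population = population + offspring
    # selection/death: finite capacity culls the least fit
    population.sort(key=fitness, reverse=True)
    return population[:capacity]

random.seed(0)
pop = [[0] * 16 for _ in range(10)]
for _ in range(100):
    pop = step(pop)
print(max(fitness(g) for g in pop))
```

Because parents compete alongside offspring, the best fitness never decreases, and the population climbs the fixed landscape until it saturates; that saturation is the contrast with the open-ended case the article is after.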

A concise formulation of this process would be something like open-ended natural selection of interacting code-data-dual algorithms.

Code-data duality is needed so that algorithms are able to modify each other, or even themselves. I would guess that open-endedness may incorporate some weaker "future potency" form of Turing completeness: if we assume a discrete ontology with finite space and countably infinite time, then algorithms can become arbitrarily complex and access unbounded memory only in the infinite-time limit.

Please consider whether this is an interesting mathematical modelling problem for research, and share your thoughts.

Appendix: My contribution to the open-ended evolution research program

My contribution to the open-ended evolution research program comes from a philosophical direction. The minimal model with open-ended natural selection of interacting code-data-dual algorithms (or an equivalence class of such minimal models) is a good candidate for a model of the Universe at the deepest level, because models with OEE are models of novel stable complexity emergence (NSCE). The desire for an NSCE explanation comes from the ancient question "why is there something rather than nothing?", reformulated as "why do these structures exist instead of others?" At the moment we really have no better mechanism-explanation for NSCE (in general) than natural selection; the complexity should not only emerge but also remain in a stable state.

It is intuitive that we can investigate very simple models for their suitability to contain OEE, as it is philosophically intuitive for the deepest level of the Universe to be relatively simple, with even the space dimensions and a large part of the laws of nature being emergent (formed as a result of natural selection over a very long time). We can even assume that the Universe began from a very simple (maybe even "singular") state that became more complex over time via dynamics governed by the natural selection postulates: reproduction, heredity, variation (randomness), selection (death), individuals and (are) environment. Novelty and complication of structure come from random variation influencing the heredity laws themselves (code-data-dual algorithms reproducing and partially randomly modifying each other). Hence simple, ontologically basic models seem to be a promising direction for the OEE research program (and may make the problem easier to solve).

Appendix: Novel stable complexity emergence

It is worth noting that it is also important to explore other ways in which novel stable complexity can emerge. Before natural selection was discovered, it was natural to assume that the entire universe was created by a primordial general intelligence (aka God), since intelligent design was the only known mechanism capable of NSCE (albeit a far from ideal explanation). Evolution and natural selection (NS) is the best explanation for NSCE that we have at the moment: an endless process of survival and accumulation of novelty. But it is possible that there are other mechanisms of novelty emergence better than NS, so it is worth staying open-minded and keeping abreast of alternatives.

Appendix: Possible open-ended evolution research directions (self-reference, quantum computers, discrete ontology might not be enough)

  • Self-referential basis of undecidable dynamics: from The Liar Paradox and The Halting Problem to The Edge of Chaos,
  • A discrete ontology might not be enough to express our current universe. See the discussion "Can the bounded-error quantum polynomial time (BQP) class be polynomially solved on a machine with a discrete ontology?":

    What are your thoughts on possible ways to answer whether problems that are solvable on a quantum computer within polynomial time (BQP) can be solved within polynomial time on a hypothetical machine that has a discrete ontology? The latter means that it does not use continuous manifolds and such; it only uses discrete entities, and maybe rational numbers as in discrete probability theory. By discrete I mean countable.

Further info links

Discussion