Binary and genetic code

Consider the following: binary code can be optimized for a problem through variation, given enough iterations (in the same way natural selection can give rise to a solution through genetic variation). Examples of this are decision trees or hyperparameter optimization.
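To make that concrete, here's a toy sketch of my own (the target string and fitness function are invented for the example): a binary string is improved toward an arbitrary goal purely through random mutation and selection, in the spirit of a genetic algorithm.

```python
import random

# Toy search: evolve a bitstring toward a hypothetical target purely through
# random variation and selection, with no gradient and no explicit plan.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(bits):
    # Count positions that match the target (higher is better).
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]
        # Keep the fittest half, refill the rest with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"reached fitness {fitness(best)}/{len(TARGET)} after {gen} generations")
```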

However, the structures that arise from genetic code are self-assembling and shaped by development. For example, how lower-level biological units form a higher structure is bound by the laws of physics and biochemistry. Computer instructions, by contrast, occur as an emergent property of the underlying electricity in the circuits, but the fact that those instructions are binary means any variation can only come from self-propagation. In a certain sense: if the algorithmic design space of binary code is limited by itself and only itself, a self-assembling dynamic system is limited by either 1) the set of implicit or explicit rules that configure it, or 2) the data we feed into it. Nowadays we are deriving 1 from 2, through “deep learning”, but that’s the wrong approach.

Take an example of synthetic “life”: Conway’s Game of Life. Complexity arises from simple rules, and precisely this simplicity determines the design space of the game in a beautiful way. It turns out those rules are also autocatalytic, in that they can generate stages of a program with causality (e.g. perpetual motion, figures that repeat themselves and behave in predictable ways, states where both the forward and backward state can be predicted, etc.).
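Here's a minimal sketch of those rules (the coordinate convention and helper names are my own), plus the classic glider, one of those figures that behave in predictable ways: it reproduces itself one cell down and to the right every four generations.

```python
from collections import Counter

def step(live):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider has reproduced itself, shifted one cell
# down and to the right: the future state is entirely predictable.
shifted = {(x + 1, y + 1) for x, y in glider}
print(state == shifted)  # True
```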

In our case, in our modern-day paradigm of so-called AI, the solution instead consists of extracting patterns from data fed to the system through, basically, fancy statistics. That’s fine, but it turns out it’s not the recipe for a general problem solver. In fact, these models depend on overfitting over a large landscape of offered variation. Basically, a language model (such as GPT-3) is only good because it has been exposed to all the English-language text its creators have been able to feed it, but it’s incapable of understanding basic logical or semantic content. That is, it doesn’t have a mental model of a chair, other than what “chair” is usually used for in a sentence (as a noun, statistically more likely to be accompanied by certain other words, or as a set of pixels, etc.).
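To make that “statistics, not understanding” point concrete, here's a deliberately crude sketch - not how GPT-3 actually works internally, but the same spirit of deriving everything about a word from co-occurrence counts. The tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model is trained on billions of words, but the
# principle of predicting the next token from co-occurrence statistics is the same.
corpus = (
    "the chair is in the kitchen . "
    "she sat on the chair . "
    "the chair is made of wood . "
    "he pulled the chair to the table ."
).split()

# Count bigrams: for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # The entire "understanding" of a word is this frequency table.
    followers = bigrams[word]
    total = sum(followers.values())
    return {w: n / total for w, n in followers.items()}

print(predict_next("chair"))
# {'is': 0.5, '.': 0.25, 'to': 0.25} - no concept of sitting, legs, or furniture,
# just the distribution of words that happened to follow "chair" in the corpus.
```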

Again, the problem might be that mental models can’t really arise from anything other than actual perception of, and interaction with, the real world.

Apparently, I’m not the only one who believes that agency can only be derived from the world itself.

Addenda

Before reading Free Agents, I hadn’t thought about genetic regulation in terms of logic gates - maybe because the biochemical processes underlying it are almost stochastic, since they operate at the micro scale. Anyway, go read that book if you are interested in the topic; it’s excellent.

Backlinking