Thoughts on the Algorithmic View of Reality
Here I'm continuing on with the topic of viewing reality through algorithms, which I talked about in my last post.
Finding decent examples, and building a vocabulary/model to talk about them, is what I'm trying to do at the moment. Here are some in-progress thoughts. I'll call the view that gives prominence to objects over algorithms the Objectual View (and the alternative view the Algorithmic View).
Characteristic of the Objectual View is that things happen as a result of directly perceivable cause-and-effect: we can see that an object, or a group of objects, is causing something to happen. Typically, the effect follows directly after the cause; that's why we can perceive it directly. The Objectual View seems to be the way we're designed to see the world; we have to learn to see it through the Algorithmic View.
In the Algorithmic View, things still happen as a direct result of cause-and-effect, but the chains of cause-and-effect are less visible. I haven't yet nailed down a clear picture of what's going on, but I can describe some of the cause-and-effect relationships that might be present in these cases (a proper explanation of them will have to wait for a later post). There are cause-and-effects that cross levels of organisation; effects that accumulate through some chain of interactions; and effects that run, or seem to run, contrary to the "purpose" of an object (in the case of entities with intentions, these are often 'economic' in nature). Often the effects will be temporally distant from their causes.
When weighing the Algorithmic View against the Objectual View, I think I can, as a first cut, distill the difference down to the following. There are various rules of cause-and-effect working at various levels of reality. Rather than cause and effect being two points connected by the arrow of causality, we could visualise it as a tree, with causes being leaves, combined causes being branch points, and the ultimate effect being the trunk. And rather than spanning a distinct chunk of time, the tree, from leaves to trunk, may span physical distance and time, and may lie across the workings of unrelated processes, and still result in the ultimate effect, as long as that distance, that time, and those unrelated processes are not enough to throw out the algorithm and stop it from causing the effect. That is, the algorithm is not 'implemented' by a single thing, and can be very dispersed: carried by interactions that occur here and there, with all sorts of things happening in between. What we have trouble appreciating is that there doesn't need to be any co-operation, any working towards the effect.
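To make the "no co-operation" point concrete, here's a toy sketch (the numbers and names are all invented for illustration): many independent "leaf causes" each give a tiny nudge to some shared quantity, none of them knows about the others or about any ultimate effect, and most of what happens in the world is unrelated to them; yet the effect, the quantity crossing a threshold, still reliably arrives.

```python
import random

def run_world(steps=10_000, threshold=20.0, seed=0):
    """Toy model of a dispersed causal tree.

    A few uncoordinated 'leaf causes' each occasionally nudge a shared
    level; everything else happening in the world is unrelated noise.
    No agent aims at the threshold, yet it gets crossed anyway.
    """
    rng = random.Random(seed)
    level = 0.0
    effect_at = None  # step at which the 'trunk' effect occurred
    for t in range(steps):
        actor = rng.randrange(100)
        if actor < 5:
            # a dispersed leaf cause: a tiny, uncoordinated contribution
            level += rng.uniform(0.0, 0.1)
        # the other 95% of events are unrelated processes happening
        # 'in between'; they change nothing relevant to this effect
        if effect_at is None and level >= threshold:
            effect_at = t
    return level, effect_at
```

Running it, the accumulated level ends up around 25, comfortably past the threshold, even though each individual contribution is negligible and no step "works towards" the result. The point of the sketch is only that the causal tree here has hundreds of leaves, scattered across time, with unrelated events interleaved throughout.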
Another way of thinking about this is that there are various ways that algorithms can be implemented by lower-level entities, and these range across a spectrum from the 'most direct' (two end-points connected by cause-and-effect) all the way to very 'dispersed' implementations. As we move across this spectrum towards the more dispersed implementations, we are loosening the constraints on how directly the algorithm is implemented.
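That spectrum can itself be sketched in code (again a toy, with made-up parameters): here a `decay` factor stands in for dispersion, with some of the accumulated causal contribution dissipating before the next contribution arrives. At `decay=1.0` the implementation is direct enough that the effect occurs; make it dispersed enough and the algorithm is "thrown out", and the effect never happens.

```python
import random

def dispersed_run(decay=1.0, steps=10_000, threshold=20.0, seed=0):
    """Sketch of the directness spectrum.

    `decay` < 1 models dispersion: between contributions, part of the
    accumulated level dissipates. Returns True if the ultimate effect
    (crossing the threshold) ever occurs.
    """
    rng = random.Random(seed)
    level = 0.0
    for _ in range(steps):
        level *= decay  # dispersion erodes the accumulated causes
        if rng.random() < 0.05:
            level += rng.uniform(0.0, 0.1)  # an uncoordinated leaf cause
        if level >= threshold:
            return True
    return False
```

With no decay the contributions accumulate and the threshold is crossed; with even one percent decay per step, the level settles at a tiny equilibrium and never gets anywhere near it. "Loosening the constraints" on directness works only up to the point where the losses stop the algorithm from completing.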
It's clear that this kind of thinking has been around for a while. Emergent systems, emergent laws, and systems-based thinking all share this type of view. It crops up in places such as evolutionary theory, economics and no doubt many more! What I'm wondering, though, is how thoroughly it has been reduced to its essential nature, and how forcefully (that is, how directly and explicitly) that nature has been stated.
Oh, a part of the picture I've missed out in the above description is how these algorithms can exist in the first place. That's definitely part of what makes it hard to see things in this way. I'll try talking about that sometime soon.