For example, I love how the human brain consists of layers from different evolutionary phases (like the mammalian and reptilian brains), which reminds me of seeing remnants of teletype code in modern macOS.
FYI: The idea of a “lizard brain” is an outdated concept in neurology and evolution. Modern brains, even in regions shared by mammals and reptiles, contain both novel and ancestral neurons, showing that these regions haven’t remained static in reptiles, much less mammals.
Did not know that. But there’s still gotta be some legacy code that is not really needed for humans somewhere. Perhaps some stuff in our genes that’s just there and no one really knows what good it does, like some obscure dependency in computer code.
It’s not your fault. Pop psychology refuses to die. There is plenty of stuff in good standing among scientists that’s absolute garbage.
The whole branch of evolutionary psychology is so subjective and so immune to experiment that it’s a bastion for cranks, misogynists, and scientific racists. And speaking of scientific racism: IQ does not measure intelligence.
I am a pot smoker. My perfect vacation consists of a model, a view, and a controller.
This is so funny that I’m angry because I can never use it at work! Ah well, it’ll just have to torment my peers at hack(er’s drinking) night!
I love finding similarities like these, and the one you mentioned about teletype is new to me, and a really cool interpretation. Though I tend to view things more mechanically than naturally; I love playing factory games like Factorio and Satisfactory. I guess the natural metaphor I use most is the human body, which is a really complex system with lots of interacting subsystems. I forget the name of the book and author, but a medical doctor wrote about complex systems and said that any sufficiently complex system, like the human body, is always dealing with some degree of failure, so any artificial system needs to be fault-tolerant at many levels to keep functioning properly.
Basic data structures are so primitive as to be emergent. Obviously we see stacks of things and queues every day, but how often do you look at a highway and see a circuit, or plumbing and see a DAG?
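Speaking of plumbing as a DAG, here’s a tiny sketch of that idea (the fixture names and layout are invented for illustration): model the pipes as a dependency graph and let Python’s stdlib graphlib give you a valid flow order, upstream before downstream:

```python
from graphlib import TopologicalSorter

# Hypothetical plumbing layout: each fixture maps to the fixtures that
# feed it. Edges point from supply toward drain -- a DAG, since water
# (ideally) never flows in a cycle.
plumbing = {
    "main_supply": [],
    "water_heater": ["main_supply"],
    "kitchen_sink": ["main_supply", "water_heater"],
    "bathroom_sink": ["main_supply", "water_heater"],
    "sewer_line": ["kitchen_sink", "bathroom_sink"],
}

# static_order() yields each node only after all of its predecessors.
order = list(TopologicalSorter(plumbing).static_order())
print(order)  # upstream fixtures always appear before downstream ones
```

Same trick works for build systems, package managers, spreadsheet recalculation… once you see the DAG, it’s everywhere.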
Conway’s Law too! If you didn’t already know about it, you’ll see it everywhere now. I think it defines reality. A system cannot be designed that isn’t constrained by the communication ability of its creator. All art is the limit of what we can express.
That is really feckin interesting.
And
All art is the limit of what we can express.
Can you elaborate on that bit?
Not exactly about codebases, but I believe the universe operates like a cellular automaton (CA) at its most fundamental level. (Idea originally from Stephen Wolfram.)
A CA, if you don’t know, is a simulation in which a cell in a grid evaluates nearby cells and returns a value based on what it finds and the rules given to it. What happens next is called ‘emergent behavior’, and in some ways mimics physics and even primitive life. In fact, many physics models use CA already.
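A minimal sketch of the idea, using a one-dimensional “elementary” CA rather than a grid (Rule 110 is just the classic example; any rule number 0–255 works the same way). Each cell’s next state depends only on itself and its two neighbors, yet structure emerges from nothing but this lookup:

```python
def step(cells, rule=110):
    # Each cell reads (left, self, right) as a 3-bit number, then looks
    # up its next state in the corresponding bit of the rule number.
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Start with a single live cell and watch a pattern grow.
row = [0] * 31
row[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

All the “physics” lives in one 8-bit rule number; everything you see in the output is emergent.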
What this means to me is that there is ultimately only one type of energy/matter, and that everything we can detect (quarks, photons, atoms, etc.) is made from the same ‘stuff’, and that nothing is truly random… it’s just that we lack the tools and models to predict what happens below the smallest observable scale.
Have you seen Lenia?
My understanding of what a cellular automaton could be was greatly expanded when I learned about it. To me, something like loop quantum gravity seems to have the same rough “shape” as a very complex cellular automaton. I think we’re (humanity) getting closer to a breakthrough in understanding on that front.
I didn’t know it was called Lenia, but I have seen implementations of it here and there. It’s pretty cool that people have gotten it working, and I wonder what other totalistic cellular automata besides the Game of Life would look like when evaluated in a continuous manner.
The problem with making truly continuous CA is that we are using digital computers to do it, so we can only ever get an approximation of what a CCA would look like. I’ve seen just how big a difference 8 bit vs 16 bit values make, and I imagine that even though higher and higher bit depths would converge upon a truer model, issues will still persist. Plus, we are still stuck with using grids…
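A tiny illustration of that bit-depth point (this uses the logistic map as a stand-in for one chaotic cell, not any particular CA): snap the state to an n-bit grid after every update, the way a fixed-precision cell would store it, and it leaves the full-precision trajectory:

```python
def iterate(x, steps, bits=None):
    # A chaotic update rule standing in for one "continuous" CA cell.
    # With bits set, the state is rounded to an n-bit grid every step,
    # discarding any detail below that resolution.
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        if bits is not None:
            scale = 2 ** bits
            x = round(x * scale) / scale
    return x

exact = iterate(0.3, 10)  # float64 "reference" trajectory
print(abs(iterate(0.3, 10, bits=8) - exact))
print(abs(iterate(0.3, 10, bits=16) - exact))
```

More bits pushes the divergence later, but any finite precision diverges eventually, which is basically the convergence-without-arrival problem you’re describing.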
I see Systems Engineering analogies in a lot of complex natural systems. It’s a great model to understand how the world around you works, as long as you remember it’s only a model.
For example, I optimize my navigation around town sort of like the OSPF network routing protocol. I consider the speed limit & number of lanes to be analogous to the link cost, traffic lights as Layer 3 hops, and stop signs as Layer 2 hops. I consider the local highways to be my “backbone area” so navigation is optimized to find the shortest path from wherever I am to the nearest major highway. Sometimes the solution takes me a mile or two out of my way, but I’ll avoid 4 or 5 busy lights by taking a back road or cutting through a residential block.
In fact, the airline network is similarly structured: for a given carrier, routes among their hubs are their backbone area, and routes between regional airports in different regions connect through one or two hubs. As a traveler between two regional airports, you’re likely to fly to the hub closest to your destination and take a second leg back out to the other airport. All the better if you just live near a hub.
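For what it’s worth, the analogy maps straight onto code. Here’s a hedged sketch (the road names and link costs are invented, and this is plain Dijkstra rather than real OSPF, which floods link-state advertisements and runs SPF per area):

```python
import heapq

# Toy road graph, OSPF-style: edge weight = "link cost", higher for
# slow roads and busy intersections. All values here are made up.
roads = {
    "home":           [("back_road", 3), ("main_street", 5)],
    "back_road":      [("highway_onramp", 2)],
    "main_street":    [("highway_onramp", 6)],  # shorter, but busy lights
    "highway_onramp": [("downtown", 4)],
    "downtown":       [],
}

def shortest_path(graph, start, goal):
    # Plain Dijkstra: always expand the cheapest known route first,
    # the same greedy idea OSPF uses to build its routing table.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_path(roads, "home", "downtown"))
```

The back road wins here despite the detour, exactly like cutting through a residential block to dodge a string of lights.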
For example, I optimize my navigation around town sort of like the OSPF network routing protocol.
Slime mold has been shown to be an excellent way to plan rail systems:
https://www.wired.com/2010/01/slime-mold-grows-network-just-like-tokyo-rail-system/
I’ve heard the phenomenon you’re describing called the “lava layers”.
Genetics is like that.