Just learned about this. A long read, but really interesting.
He does love the cellular automata.
He kind of invented it. He did write the major book in the field.
He absolutely did not invent CA. His book was published well after it had become an established modeling technique. Conway’s Game of Life was published in 1970. Complexity theory modeling had moved well past CA by the time Wolfram’s book came out, to the point that most of us didn’t know why he bothered writing it, much less thought it would revolutionize science.
I’ve been puzzled by the seemingly quiet reception Wolfram’s ideas seem to receive from the serious scientific community. He doesn’t seem like a huckster, and to my uneducated ears it sounds like a plausible set of ideas. Like other people are saying, it’s all a bit over my head.
But like, no one talks about it. Is it just blasé, or not particularly relevant even if true, or just less useful than he seems to think?
I can give you my impression and that of the people I spoke to about it. I’m coming from the perspective of a theoretical biologist who was heavily involved with computational models of complex systems - particularly ones with biological foundations. I worked with simulations ranging from molecular cell biology up to ecosystems.
I don’t want this to sound dismissive, but CA are cartoonishly simple versions of complex systems. Once you get past illustrating the idea that simple rules can give rise to complex behaviors, that they’re Turing complete, and that there are neat and interesting phenomena that can arise, I think you’re pretty much done. They’re not going to show you anything about the evolutionary dynamics that drive carcinogenesis. They’re not going to let you explore the chemistry that might have given rise to the origin of life. They’re not going to let you model how opinions and behaviors cascade on social networks.
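To make “simple rules” concrete, here’s roughly the entire trick as a minimal sketch (Python; the particular rule, names, and grid size are just mine for illustration). Rule 110 is the classic Turing-complete example:

```python
# Minimal elementary cellular automaton, a sketch of what "simple rules" means.
# The whole rule is one 8-bit number: bit k gives the next state of a cell whose
# (left, center, right) neighborhood encodes the integer k.

def step(cells, rule=110):
    """Advance a row of 0/1 cells by one step (edges wrap around)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> neighborhood) & 1)
    return nxt

# Start from a single live cell and watch persistent, interacting structures appear.
cells = [0] * 60 + [1] + [0] * 60
for _ in range(40):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

That’s basically the whole demo: an 8-bit rule, and you still get persistent, interacting structures. It doesn’t tell you anything about a tumor or an ecosystem, which is the point.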
Topics like emergence are core to complexity theory, but CA can only illustrate that it exists, and it does so in such an abstract way that it doesn’t really translate into an understanding of how emergence is grounded in real world systems.
Wolfram’s problem, in my opinion, is that he was largely disconnected from the complex adaptive systems community, and for some reason didn’t realize we had largely moved on. I don’t know anyone in the CAS community that thought his work was groundbreaking.
I do have to say that Robert Sapolsky seems to have found his work interesting, and I am very deeply interested in Sapolsky’s work. But he’s a neurobiologist, not a complexity scientist, and he doesn’t draw a concrete connection between Wolfram’s work and his own, other than the generic connection that complex systems can arise from simple rules. That’s something we’ve known since Conway and Lorenz.
My mind is open to counter arguments, but that was my impression and as far as I could tell, the same was true of my colleagues. I think that the general academic reception to his book bears this analysis out. It’s like if someone wrote a comprehensive book about all kinds of Prisoner’s Dilemma models long after we’ve moved on from PD to modeling more complex and accurate depictions of cooperative versus competing interactions. Students should absolutely study PD, and they should study CA. It’s just not something you want to hang your academic hat on at this point.
Mathematica, on the other hand, is pretty neat.
That’s some great perspective, thanks for the detailed reply!
The idea that simple rules lead to incredibly complex behavior is not new, and Wolfram was not the first mathematician to realize it, even if he wants to pretend he is.
I’m big into fractal geometry, so I would say Mandelbrot was the first 20th-century mathematician to really get that concept, alongside the founding chaos theorists, but even their realizations about the fractal nature of the universe ride on the shoulders of others’ work.
I guess for computational mathematicians, the fractal nature of the universe and simple rules making complex systems are new and mind-blowing; for me and everyone who’s read through Deep Simplicity, it’s old ideas recycled into a new branch of mathematics, reworded in terms a computer scientist thinks in.
It’s not that the ideas aren’t important or mind-blowing; they just aren’t new, and the academic community has had decades to digest them.
And before the academic community came along, many Eastern religions came to similar conclusions about the nature of the universe a couple thousand years earlier, thanks to some good old-fashioned shrooms and pot. So these general ideas have been floating around forever, and academics weren’t even the first to suggest them, only the first to kind of prove them right with logic and experimental evidence.
One difference now, though, does seem to be that the theory is in the hands of people with the tools and knowledge to scale it much bigger and longer than anyone previously could. That could lead to it finding new frontiers, instead of just new people rediscovering an old thing.
Kinda like how neural nets have existed since like the ’80s, but only more recently have we had the computational resources to actually make them do something more interesting than fit a complex math equation.
Or 3D graphics, where the math existed long before computers could render it; eventually they could render images, then later that gave us Doom and 3D animation, and things have exploded in that space since then.
Or how the first computer programmer existed long before the first computer but programming didn’t really take off until well after the computer existed.
I don’t know if Wolfram has something groundbreaking here. Maybe he does, maybe he’s wasting his time. But from reading that paper, it’s clear that if this is something, it requires a scale that isn’t realistic for humans to explore on their own without tools to automate it. It may even require a scale that computers today can’t get close to, and maybe never will. For example, if they do find the rules and try to run a simulation smaller than a galaxy, would stars even show up at that scale? Planets? Black holes? Getting a galaxy the size of ours, and the others we see, might depend on having a universe as big as ours; otherwise there isn’t enough variation to produce structures this large, which might make it look like the real rules aren’t the ones we’re looking for.
When the only tool you have is a deep lifelong understanding of computational language, everything starts to look like a hypergraph.
I was simulation-pilled in college, so that adds up.
He explained this on a podcast with Lex Fridman and broke my brain a little. I’ve forgotten his explanation of it since, though. Joscha Bach has an equally interesting theory about how the human mind works.
Got a link?
I’m not sure which one you mean.
Stephen Wolfram: Complexity and the Fabric of Reality | Lex Fridman Podcast #234 (onwards from the 23-minute mark)
Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101
Unfortunately one can only upvote once. Two absolutely fascinating episodes!
I wish I were smart enough and/or educated enough beyond my undergrad Physics classes from decades ago to actually understand this. I get the big picture stuff, then as it gets into the details there’s always a point where I realize I’m decoding words, but I have no idea what the hell they’re writing about.
If he were smart and fully understood his argument, he could explain it to anyone in terms they understood.
A Brief History of Time didn’t come out until 1988. You’re implying that the ability to break it down, Barney-style, is necessary to be smart or to fully understand something?
It sounds really interesting. However, I can’t help but wonder whether this actually amounts to anything deeper: a theory where any of these rules can be chosen arbitrarily, and which ultimately comes down to some kind of Turing-complete automaton, may just be using its “computational possibilities” to arbitrarily arrive at the physics we already know.
Maybe that wasn’t super coherent, so let me explain. This theory basically posits the universe as a kind of machine that is just executing these rules over and over, but we don’t know what those rules are. I feel like the inherent “computational power” of these theories just lets them come up with the laws of physics through sheer brute force.
I mean, if there’s essentially an infinite number of possible rules, then of course some rules exist that would fully explain our known physics. That doesn’t mean those are the rules by which the universe works; it just means you have found some kind of “compressed” version of physics in a ruleset. I’m not sure that’s really any deeper an insight.
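As a toy illustration of what I mean (just a sketch I made up in Python, not anything from the paper): brute-force the 256 elementary CA rules for ones that reproduce some “observed” data, and a match only shows the observation can be compressed into a ruleset, not that nature runs on that rule.

```python
# Toy sketch of the brute-force worry: search a rule space for rules that
# reproduce an "observation". The target data here are made up (the output of
# rule 90 from a single seed); nothing about real physics is implied.

def step(cells, rule):
    """One step of an elementary CA over a row of 0/1 cells (edges wrap)."""
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=31, steps=15):
    """Evolve a single live cell under the given rule and return the full history."""
    cells = [0] * (width // 2) + [1] + [0] * (width // 2)
    history = [list(cells)]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(list(cells))
    return history

observation = run(90)  # pretend this is the "physics" we observed
matches = [r for r in range(256) if run(r) == observation]
print(matches)         # more than one rule fits this observation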
Also, isn’t there a problem in that this theory doesn’t seem usable for predictions? From what I understand, there is essentially no way to “simulate” where the rules lead without actually executing the rules step by step. It’s as if you couldn’t just simulate a ball falling; you’d need to actually have a ball and let it fall in order to see what happens. But maybe I’m not understanding it correctly.
Finding “compressed” physics, as you put it, is what we’ve been doing since the Enlightenment. Thermodynamics is a “compressed” version of quantum mechanics, for example. Finding these basic theories can only help us in understanding the universe.