Some of the ideas in this post might pan out, some might not. Regardless, I do think token event graphs will turn out to be important. Of course, I'm biased: I coined the name TEG -- although the underlying idea originated with Max Piskunov and his "local multiway systems" [0].
What's promising about TEGs (and their incidence hypergraph, the rewrite hypergraph) is that they offer a clean methodology to decompose the behavior of a non-deterministic automaton into its causally independent parts. We're still trying to understand how to think about them, but the most promising approach seems to use the lens of (modular) representation theory, which gives us a rich mathematical toolkit to work with.
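To make "causally independent parts" concrete, here is a minimal sketch of a TEG for a toy non-deterministic string rewriting system. The data layout and names are just my own illustration, not an established API: tokens are (id, value) pairs, and each event records the tokens it consumed and produced, so two events are causally independent exactly when their token sets are disjoint.

```python
# Toy token event graph (TEG) for a non-deterministic string rewrite system.
# Illustrative sketch only: tokens are immutable (id, value) pairs; each
# event records which tokens a rewrite consumed and which it produced.
from itertools import count

ids = count()

def rewrites(tokens, rules):
    """Yield (rule, position, consumed, produced) for every applicable rewrite."""
    values = [v for (_, v) in tokens]
    for lhs, rhs in rules:
        for i in range(len(values) - len(lhs) + 1):
            if values[i:i + len(lhs)] == list(lhs):
                consumed = tokens[i:i + len(lhs)]
                produced = [(next(ids), v) for v in rhs]
                yield (lhs, rhs), i, consumed, produced

def build_teg(initial, rules, steps):
    """Breadth-first expansion; the returned events form the token/event graph."""
    frontier = [[(next(ids), v) for v in initial]]
    events = []
    for _ in range(steps):
        nxt = []
        for tokens in frontier:
            for rule, i, consumed, produced in rewrites(tokens, rules):
                events.append({"rule": rule, "in": consumed, "out": produced})
                nxt.append(tokens[:i] + produced + tokens[i + len(consumed):])
        frontier = nxt
    return events

# Two overlapping rules make the system non-deterministic:
for e in build_teg("ABA", [("AB", "BA"), ("BA", "AB")], steps=2):
    print(e["rule"], e["in"], "->", e["out"])
```

Events that share no tokens commute, and that commutation structure is what the representation-theoretic machinery would act on.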
If this methodology works, it will be possible to represent many kinds of systems in disparate fields, ranging from distributed computation to physics to biology to machine learning, in the common language of TEGs and their representations. Of course it may turn out to be merely a recasting of older ideas. In particular the Krohn-Rhodes theorem [1], categorical Petri nets [2], and the GNS construction [3] seem like they might be describing the same or an analogous procedure.
I hope to describe this approach in full detail soon, using quiver geometry [4].
Do you know whether any of the approaches (yours or one of those you mention in the links) has had any kind of practical success, or has maybe been adopted by some community?
I mean, it's nice to generalize, but has it helped anyone yet, practically? (E.g., have epidemiologists, biologists, or mathematical modellers adopted any of the approaches?)
I'm not an expert in Petri nets, but they are definitely used for modeling in various fields. Krohn-Rhodes theory also claims some successes if you read that article.
As for the stuff I'm actively working on: even if it is successful, and uniquely suited to solve some particular problem, I'd imagine it would take at least a few years to be actively applied in the right places -- and it might end up being me helping to apply it!
If you look back at, for example, graph theory: it was explored on the pure mathematics and computer science side for many decades before it spawned e.g. network science and got applied in sociology, economics, etc.
So, yeah, don't hold your breath! If you'd prefer not to read any blog posts heralding XYZ as the next big thing before XYZ has led to a concrete breakthrough, I think that's totally fair. I myself am quite happy to work quietly on things until there is a satisfying and complete application, but Stephen isn't like that. Both stances have pros and cons.
Petri nets are interesting, and used e.g. in biology to model some cell regulatory processes.
However, it's really hard to infer properties from them in a static manner using model checking or abstract interpretation.
How are we going to infer properties about Petri nets or the models Wolfram is proposing? Geometry and differential equations have been so successful because they are amenable to algebraic and analytic approaches.
Aren't these multiway systems essentially what we see with ambiguous grammars that yield parse forests rather than parse trees? The "forest" here is a forked parallel computation started upon encountering any ambiguous rewrite.
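For anyone who wants the toy version of that correspondence, here is a small sketch with the textbook ambiguous grammar E -> E "+" E | "1" (the code is mine, just for illustration): each distinct bracketing is one tree in the forest, i.e. one branch of the forked computation.

```python
# Enumerate every parse of an ambiguous grammar E -> E "+" E | "1".
# Each distinct bracketing is one tree in the parse forest.
from functools import lru_cache

@lru_cache(maxsize=None)
def parses(s):
    trees = []
    if s == "1":
        trees.append("1")
    for i, c in enumerate(s):
        if c == "+":                      # try every split point
            for left in parses(s[:i]):
                for right in parses(s[i + 1:]):
                    trees.append((left, "+", right))
    return tuple(trees)

print(parses("1+1+1"))         # 2 trees: (1+(1+1)) and ((1+1)+1)
print(len(parses("1+1+1+1")))  # 5 trees: the Catalan numbers show up
```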
One application of these might be to resolve the meaning of probabilities under the many-worlds interpretation of quantum mechanics. When an interaction takes place, a number of worlds is created, one for each possible outcome, but it was never clear how to formally derive the probability of finding yourself in any given world (the Born rule).
Under the section titled "Observers, Reference Frames and Emergent Laws" in this article, you can see some branches merging again in that first graph, so perhaps the probabilities of the Born rule are due to parallel computations that merge in this way, i.e. the probability you'd find yourself in the BBBB world rather than the AA world at step 3 of the computation is 2/3 vs. 1/3, respectively. If rules generate recurring patterns as shown there, these might show up as stable probabilities in aggregate.
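As a toy illustration of the kind of counting that would involve (the rules here are invented for this sketch, not the ones from the article), you can evolve a multiway system while letting merged states accumulate the path counts of every branch that reaches them:

```python
# Toy path counting in a multiway system whose branches can merge.
# Made-up rules; the point is only that a merged state's weight is the
# sum of the path counts of all branches reaching it, which is the kind
# of statistic a Born-rule derivation would need to formalize.
from collections import Counter

def successors(state, rules):
    """All states reachable from `state` by one rewrite, with multiplicity."""
    out = []
    for lhs, rhs in rules:
        i = state.find(lhs)
        while i != -1:
            out.append(state[:i] + rhs + state[i + len(lhs):])
            i = state.find(lhs, i + 1)
    return out

rules = [("A", "AB"), ("A", "BA"), ("AB", "A"), ("BA", "A")]
weights = Counter({"A": 1})               # path counts per state
for t in range(3):
    nxt = Counter()
    for state, paths in weights.items():
        for succ in successors(state, rules):
            nxt[succ] += paths            # merging: counts add up
    weights = nxt
    total = sum(weights.values())
    print(f"step {t + 1}:", {s: f"{n}/{total}" for s, n in weights.items()})
```

After two steps, for instance, the state "BAB" is reached along two paths and "ABB" along only one, so a path-counting measure would weight them 2:1.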
> Aren't these multiway systems essentially what we see with ambiguous grammars...
Yes.
> One application of these might be to resolve the meaning of probabilities under the many-worlds interpretation...
There's some way to go before we get to Hilbert spaces, operator algebras, and the like, but yes, the idea would be that some path-counting procedure would be used to derive the Born rule. It would be great to bridge this with Carroll's self-locating uncertainty paper [0].
> Under the section titled "Observers, Reference Frames and Emergent Laws" in this article, you can see some branches merging again in that first graph, so perhaps the probabilities of the Born rule are due to parallel computations that merge in this way...
All of this business about multiway computation and even “rulial space” seems to have been anticipated by Toffoli to some extent in “Action, or the Fungibility of Computation” [0].
The brilliant point of this paper is that, in the same way entropy describes our ignorance of the microscopic state of a system, the action (the time integral of the Lagrangian, itself equal to kinetic energy minus potential energy) seems to quantify our ignorance of the microscopic law that governs a system.
It’s not made explicit, but as a lattice gauge theorist it’s an easy analogy to make: the gauge configurations that contribute correspond to different Dirac operators (i.e. PDEs) for matter. Gauge symmetry is in some sense a “rulial space” but… you know… one discovered 100 years ago.
Thanks for the link! I'd be curious if you know about other information-theoretic approaches to understanding phenomena even in say classical mechanics (let's put aside statistical mechanics where the connections are already well understood). I'm dimly aware that there are fields like "symbolic dynamics" but not sure of the best entry point into that literature, or where to find the most powerful perspectives it offers without getting stuck in the weeds.
Not sure I have anything concrete to say, except that the “try all rules” rulial space seems to be an interesting way to make Tegmark’s mathematical universe hypothesis [or at least the more limited computable version] into a concrete thing. But I have to admit that when I read Toffoli’s paper I found it amazing and exciting, but otherwise hard to see how to advance. I reread it every few years and get something new each time.
You know, it's bizarre, I recently pursued a similar approach, attempting to derive the quadratic kinetic energy of an automaton controlled by a computer program (it used a circular tape to decide whether to move left or right). I set up a coarse-graining, identifying energy with information (well, negentropy) of the state of the tape, and arrived at similar results: higher energy = fewer microstates. In other words, there are fewer programs that move at the speed of light than at some slower speed. Which makes sense! There is only one tape that tells the automaton to move right at every step. Toffoli says exactly this: "A low-energy state is 'cheap' because it is 'common', there are so many ways to achieve it". Conservation of energy is exactly microscopic reversibility! What could be cleaner?
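Here is the stripped-down version of that counting (a sketch written just for this comment; I've flattened the circular tape into a plain bit string, one bit per step, but the combinatorics is the same):

```python
# Count "programs": a tape of n bits, each telling the automaton to move
# right (1) or left (0). The number of tapes realizing a given net speed
# is a binomial coefficient: exactly one tape moves right every step,
# while slow speeds are vastly degenerate. Negentropy of the tape then
# plays the role of energy, as in the comment above.
from math import comb, log2

n = 20  # tape length
for rights in range(n + 1):
    speed = (2 * rights - n) / n      # net displacement per step, in [-1, 1]
    tapes = comb(n, rights)           # microstates realizing this speed
    negentropy = n - log2(tapes)      # bits; maximal at |speed| = 1
    print(f"speed {speed:+.2f}: {tapes:7d} tapes, negentropy {negentropy:5.2f} bits")
```

And the quadratic form drops out for free: near the center the binomial is approximately Gaussian, so the negentropy goes like n*v^2/(2 ln 2) for small v, which is exactly the quadratic kinetic energy.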
That whole approach is on the shelf for now, because I first want to develop a discrete version of contact (and thence symplectic) geometry, since that will provide the mathematical formalism to express a kind of "discrete classical physics", rather than just a single toy model.
I believe that he is very bright. Do you think it is possible that he has also gotten so wrapped up in his own thoughts that he can no longer distinguish "good" theory from "bad" theory? It happens to a lot of mathematicians. Michael Atiyah a while back claimed to have proved the Riemann hypothesis. Even Hamilton got so wrapped up in quaternions that he spent the rest of his days obsessing over them (they are useful, especially in graphics, but did not revolutionize physics the way he thought they would).
I glance at his writings over the past few years and most often think that unless he uses his theory to solve a problem others care about, his line of research will be abandoned after he passes.
The book is underwhelming, however. There might be something worth investigating, although the basic mapping between finite automata and QFT is not made explicit, nor even how it could be. One type of maths is pretty firmly embedded in continuous maths on manifolds, and the other is inherently discrete. And one has probability densities and is only deterministic in the calculation of those densities, while the other is apparently completely deterministic. And how can non-local entanglement occur with the automata model? That said, there are some interesting hints in modern physics that information is related to some very fundamental processes, e.g. the entropy of black holes being related to the information held by the black hole. But he (in my opinion a mathematically advanced amateur, and no master of QFT) has too much excitement for his own idea and too much hand-waving over the details.
I did buy and read most of it, as I also have bought many of Penrose's popular books, so perhaps I'm just a disgruntled purchaser.
I largely agree about the book: I think there might be something there, but the case is far from being made. One small correction: the rewriting systems he proposes are not deterministic.
As for how non-local entanglement can occur, I'll just offer my current speculation, having spent about a year working on this kind of approach. You'll have to give up SR in its traditional sense. There will be a preferred reference frame, which is the rule application order that the automaton uses in any particular history. The challenge becomes to explain why it is not observable and why you get approximate Lorentz covariance. I think this will be less hard than one imagines -- even in Feynman's lectures you see explanations of SR that involve an aether (or a global frame if you prefer) and clocks defined by light bouncing between mirrors moving in this aether. Of course all physical laws have to be covariant, and explaining how this happens requires you to know what the physical laws are "microphysically". But the graph itself in graph-automata models is a good candidate for an aether, with particles being e.g. topological defects. Somehow covariance must reflect how a dynamical account of defect behavior changes as one re-foliates the rule application order: mysterious, but not inconceivable.
Now, entanglement: one imagines entanglement is implemented by long-range connections, which in graph-type models could take many forms. This is a kind of discrete version of the "ER = EPR" proposal. But they will have to be such a limited form of connection that they do not permit signaling, and I think the way they can do this is via some sort of knot-theoretic braiding. Only by comparing measurement outcomes classically will it be possible to deduce the way the braiding was effected by measurement and confirm you had e.g. a GHZ state.
Now, QM is more than just entanglement, but in the words of Jaynes: "QM is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature - all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble." When scrambled, all the ingredients look inextricably connected. I think the unscrambling will seem beyond hope until one has the exact recipe to recreate the omelette.
That doesn't mean what he's saying has any value. I'm not saying what he's written here doesn't, but the prodigiously smart are just as capable of being intellectually lazy as anyone else. Anyone making grandiose claims like "our Physics Project [... is] showing us something even bigger and deeper: a whole fundamentally new paradigm for making models and in general for doing theoretical science" doesn't get to rest on their laurels if they want to be taken seriously.
Did he get any significant results with his new paradigm? What unsolved problem was he trying to solve in the first place when he developed this paradigm?
The significant result would be a multiway graph that represents, say, the evolution of an electron in a vacuum. He has not got this result yet. I think he started down this path to try to further physics by going beyond traditional analytical mathematics.
Seems like an application of the ladder of abstraction. [0] That is, abstracting over every possible value of a variable in a system.
In this case we abstract over time in a system of computation. To be more precise, the system is a non-deterministic automaton, so we abstract over time and all the possible branches too.
Mapping out all the states of a non-deterministic automaton, you eventually find overlaps between different branches. To Wolfram this is a big revelation, since instead of the system getting more complicated you find some simplification, a.k.a. generality.
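A minimal sketch of that simplification (toy dynamics of my own choosing): the branching tree grows exponentially, but the set of distinct states it visits stays small, and the overlap between branches is exactly the gap between the two counts.

```python
# Compare the branching tree of a non-deterministic system against the
# set of distinct states it actually visits. Toy dynamics: two counters,
# each step non-deterministically increments one of them mod 3.
def successors(state):
    a, b = state
    return [((a + 1) % 3, b), (a, (b + 1) % 3)]

tree_nodes = 0
seen = {(0, 0)}
frontier = [(0, 0)]
for _ in range(4):
    nxt = []
    for s in frontier:
        for succ in successors(s):
            tree_nodes += 1           # every branch counts in the tree
            nxt.append(succ)
            seen.add(succ)            # but merged states count once
    frontier = nxt
print("branching-tree nodes:", tree_nodes)  # 30
print("distinct states seen:", len(seen))   # 9: all of Z3 x Z3
```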
Stephen Wolfram seems to be the quintessential intellectual DIYer. He writes 20,000 words on a highly abstract and technical subject, and every single hyperlink in the article goes to either one of his own websites, or to Wikipedia.
I respect his ability to integrate a huge body of scientific knowledge in a single brain and then articulate it in books like A New Kind of Science, but I have to wonder whether his approach is designed to maximize scientific progress, or to maximize his personal reputation. His eagerness to claim credit for ideas that could hardly be credited to a single person is a bit of a warning sign.
>He writes 20,000 words on a highly abstract and technical subject, and every single hyperlink in the article goes to either one of his own websites, or to Wikipedia.
Excerpt:
"But then—basically starting in the early 1980s—there was a burst of progress based on a new idea (of which, yes, I seem to have ultimately been the primary initiator): the idea of using simple programs, rather than mathematical equations, as the basis for models of things in nature and elsewhere."
He also draws everything out more than it needs to be, in a long-winded narrative and explicitly self-congratulatory form that usually makes me wait until someone else reputable with a computer or computational science background reads it and summarizes it, to see if it's worth suffering through the time to read it myself. Not only is it questionable (Conway and many others come to mind; would have to check dates to see origins, but honestly who cares), it's just off-putting.
It sounds to me like he's claiming credit for ideas behind weather modeling used as much as 20 years earlier, and fractal curves that were being clearly talked about in the previous decade.
In your example quote, he only managed to shoehorn in his cellular automata work. The reader is left hanging -- for an entire sentence! -- as to whether or not he was a child prodigy. Is that growth? Or is his game slipping?
His posts are very hit or miss, and the one linked in this thread is unfortunately a huge miss. He describes nondeterministic automata (and pretends he doesn't) and gives them way too much significance. Even the 'computational paradigm' he says it replaces is only a paradigm in Wolfram's mind.
The 'applications' section at the end is just ramblings at best and borderline crackpottery at worst. Which is weird, since he is undeniably smart and educated; maybe an overflow?
I was so excited by the headline... and then I was so bummed by the domain name that I now doubt myself for being excited by the headline. I clicked... there's still no danger of confusing this guy with a scientist or an academic. If somebody is real smart, but also real self-serving, it kinda cancels out their smartness. He's not the worst, but still... it's like if a paperclip-maximizing super AI was trying to convince you to let it out of its box. He's not gonna teach you anything.
It would be cool if there could be a dedicated forum to discussing the hypothesis “Stephen Wolfram has a large ego” so these threads could be about 10% of their usual size, and much higher signal.
Define science. He is certainly doing experiments and is very public about it. Better than what is going on in academia with the reproducibility crisis and the fear of publishing ideas that don’t pan out.
No, it's not better than what is going on in academia. It's like the extreme version of all the problems of academia rolled into one: hyperbolic storytelling, massive vague claims, not citing anyone but yourself, and nothing backed up by evidence matching the claims.
I spent way too much time looking at the Physics Project when it came out, there is just nothing of substance there. And no, he is not doing experiments that back up his claims, he's just running some random simulations and waving his hands at the pictures that come out.
One key aspect of the scientific method is to acknowledge that it's very easy to fool yourself into thinking you have understood something "up to ironing out some details". Every crackpot is convinced of this. And that's why science operates on the foundation of convincing others of the understanding we have reached. Not any random others, but other people who have invested the time to master what is already known, and who are making a good-faith effort to understand. I am not talking about peer review, I am talking about the long conversation that unfolds over many, many publications, conference talks, heated seminar discussions, etc.
Nothing Wolfram has done or contributed to in the last 30 years has convinced anyone of note. Deep thinkers in all branches of science that he touches upon have unanimously found his output to be a vanity project of no scientific value. That is not to say that there are no interesting and valuable ideas in the texts he produces; it's simply that these ideas are already known, not his, and he does not acknowledge or tackle the problems that they have. In turn he has not made a good-faith effort to engage with the work of others that is pertinent to his claims (usually because this would show that his claims are vastly exaggerated and known to be problematic).
He is good at one thing: selling himself as a genius outsider to people not willing or qualified to come to their own conclusions on his work.
(I rather enjoyed this review of ANKS which goes into some detail for a number of the points I make here: http://www.bactra.org/reviews/wolfram/ I personally looked quite deeply into the "papers" produced at the time of his "Physics Project", by Gorard. Some of these are things I have studied very deeply in the past and feel qualified to judge. There simply is _nothing_ there.)
I'm unsure if you're disagreeing or agreeing with me because I agree with the piece. And Sabine isn't saying that top physics isn't falsifiable, she's saying that it is, but that this is not enough. Which is true. Falsifiability is not sufficient, but it is still a necessary requirement. If your physical theory cannot even be tested against the world, it is not physics. If it can, that still might not make it relevant.
Wolfram's theory, though, is worse in that it doesn't appear to be even testable, and doesn't build on things we know describe the world well.
[0]: https://github.com/maxitg/SetReplace/blob/master/Research/Lo...
[1]: https://www.wikiwand.com/en/Krohn–Rhodes_theory
[2]: https://arxiv.org/abs/2101.04238
[3]: https://www.youtube.com/watch?v=OmaSAG4J6nw
[4]: https://quivergeometry.net