
Computation and the Fundamental Theory of Physics - with Stephen Wolfram

May 31, 2021
I am very happy to be here, at least virtually. I realize that the last time I attended a talk at the Royal Institution was in 1975, when I was 15, about 45 years ago, and I remember the talk was about liquid crystals, which is probably a good sign. One of the things I have been thinking about is what has changed in the world in those 45 years, and I think, on an intellectual level, the most important change we have seen has been the rise of computation, as a practical technological thing, as an intellectual thing, and as a way of thinking about the world. I have been lucky enough to be involved in a lot of what has happened on both the technology side and the intellectual side of that computational boom. One of the things I had been wondering about for a long time is whether this computational paradigm we have been building could be useful for understanding a kind of pinnacle scientific problem: how the universe is fundamentally constructed, what the fundamental theory of physics is. And I'm very excited that over the last year we have made spectacular progress, progress that I didn't think would necessarily happen for another 100 years.
I should say that we have been helped a lot by the fact that 20th-century physics was so successful, with the general theory of relativity giving us ideas about gravitation, quantum field theory giving us ideas about particle physics, and so on. Taking advantage of those two approaches, and now injecting this computational paradigm, we have managed to make spectacular progress, and in fact I am very confident that, when it comes to finding the fundamental theory of physics, if we think of it as climbing perhaps one of the highest mountains in science, I still haven't made it to the top of the mountain, but I'm pretty sure we're on the right mountain now, and it's pretty exciting to see the view from where we are so far. I want to talk about that today, but I thought I'd start by talking a little about this idea of computation, and maybe I can tell you a little of my personal story and how I came to be involved in these kinds of things.

The last time I was at a talk at the Royal Institution, back in 1975, I was a high-school kid at school in England, and I had gotten really interested in physics. That was probably right around the time I had discovered that, yes, a kid could do research too, and I was starting to do things like publish papers on physics. But there was this thing that was rather putting me off doing physics, which was all the mathematical calculation involved: it seemed very mechanical, I wasn't particularly good at it, and I didn't find it very interesting. But around that time I discovered these things called computers.
I thought computers could be made to do a lot of those calculational things I didn't want to do in physics, so I started using computers as a tool, at first just to do physics, and it quickly evolved to the point where that was a key part of being able to make progress in physics. A few years later I became a physicist; I was involved in particle physics and cosmology in the late 1970s, which was kind of a golden age for that kind of thing, so it's funny that things you did as a teenager still matter to people today, because you happened to do them at a time when a field was in rapid growth. But I got much more deeply involved in understanding how things could be done computationally and what the fundamentals of computation were, and then I started thinking: well, I'm trying to make progress in understanding how the world works, how nature works, so how can I make models of all these complex phenomena we see in nature, whether they are biological phenomena, complex fluid flows, all these kinds of things?
My first instinct was, well, I'll go back to those mathematical-physics methods I learned doing things like particle physics, but it turns out they didn't work very well for modeling these more complex phenomena in nature. So then I started thinking: we have this idea of making formal models of things, where we write down some kind of abstract system and say this is going to be a representation of how the world works. What kind of abstract system can we use to represent how the world works? For the last 300 years we have mostly used mathematical equations; that was the Galileo and Newton idea: let's bring in mathematics, and in particular calculus, as a way to describe the world, let's write down differential equations that describe, you know, the motion of the Earth around the Sun or whatever. But I got to thinking: can we generalize this idea of making abstract models of things, to go beyond the particulars of mathematical equations? At that time this computational paradigm was something I knew a bit about, and I started to think: well, what if we used more general rules for things, rules represented as, say, simple programs, rather than rules that can necessarily be represented as mathematical equations? That got me into a basic science question: OK, if the world can be described by programs, what might those programs be?
We shouldn't imagine that the world is necessarily described by big, long, complicated programs of the kind we write for practical purposes; they could just be programs chosen at random, maybe very simple programs. So the question arises: what do those simple programs actually do? That is something one can try to analyze, but it is also easy to do an experiment. In this case it's not a physical experiment of the kind the Royal Institution is famous for; it's a computer experiment, which is much easier for someone like me to get to work. So let me show you a computer experiment that I first did in the early '80s, although it took me a little while to realize its significance. I was interested in looking at the simplest possible programs, and these programs are things called cellular automata. They just have one line of cells, each of which is white or black, and then there is a particular rule. Let me do this live here; let's take this one. OK, so this is a rule for one of these cellular automata on a line of black and white cells, and the rule says, for example: if you see a particular place on that line where there is a white cell with a black cell to its right and a white cell to its left, then make the middle cell black on the next step. A very simple rule: it just says what to do in each of the eight possible cases. OK, let's run it and see what it does. Let's start from, say, a single black cell at the top, and let's put a mesh on the picture so it's easier to see. Here we go; let me maximize this window. This is what happens according to this particular rule we started with: if we pick this cell here, it's white-white-black, and the rule says white-white-black should become black on the next step, and that's what happens here. So: a very simple rule, started from a black cell, and very simple behavior is what we get.
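To make the demonstration above concrete, here is a minimal Wolfram Language sketch of the same kind of experiment; the rule numbers are just examples, not necessarily the ones used live in the talk.

```wolfram
(* Show the eight cases of an elementary cellular automaton rule, then run it *)
(* from a single black cell. Rule 254 gives a uniform black pattern;          *)
(* rule 250 gives a checkerboard, and rule 90 a nested pattern.               *)
RulePlot[CellularAutomaton[254]]
ArrayPlot[CellularAutomaton[254, {{1}, 0}, 20], Mesh -> True]
```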
We might say, OK, that's the kind of thing we'd expect: we put in a simple rule and it does something very simple. OK, let's try a different rule, say this one, and run it. A little more complicated: we've changed the rule a bit, and instead of the uniform black pattern we get a checkerboard. OK, let's try another rule, change a few more bits, and try this one. Now it's doing something a bit more complicated: again a very simple rule, and this is what it does. Let's run it for more steps, say 300, and remove the mesh so we can see the whole thing. What it did was take this very simple rule and create this very intricate pattern, but it's a very recognizable pattern, a nested, fractal pattern. So we might say, OK, even if the rule is simple and the pattern is not uniform, there must still be some easily recognizable regularity in the pattern. Well, now you can ask: let's look at all possible rules of this type. In a sense, we can think of a kind of computational universe of possible programs that exist, and we can turn our telescope towards this computational universe and see what's out there. There are 256 possible programs of this type, so let's write a little program that simply shows us a picture of what each of those programs does. OK, we just say 0 to 255, and this will now run every possible rule of this type, starting from a single black cell, and we look at what each one does. Sometimes what it does is quite simple: there's a black cell that just runs off to the left, and these ones alternate black and white, doing pretty simple things. We keep going, we keep going; we're looking out into this computational universe of possibilities. OK, we see a nested pattern there, that's nice; let's keep going. OK, this is my all-time favorite discovery in science. If we number these rules, this is rule 30; the binary decomposition of the number 30 gives the bit pattern that corresponds to this program. So let's take a look at what happens with rule 30. Let's go back and ask what the rule for rule 30 is: here it is, another one of these very simple things. Let's run this rule and see what happens. OK, this is what it does, and it is rather surprising. We start from just a black cell at the top; maybe I can display it with fewer steps so you can see what's happening. This is only 50 steps: we start from a black cell at the top and just follow the simple rule over and over again, and look how complicated the result is. Here I'll run it for 500 steps just to see what happens. You might say, oh, if you run it long enough it will resolve into something simple; it will be obvious that it comes from something simple. Well, there is a certain regularity you can see on the left, but most of this pattern just seems really complicated, really kind of random. In fact, if you look at the middle column of cells here, it is random for all practical purposes; in fact we've used it in the Wolfram Language as our source of pseudorandomness for 30 years.
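Here is a sketch, in the same spirit, of the "telescope pointed at the computational universe" experiment: it displays all 256 elementary rules started from a single black cell, and then looks more closely at rule 30 and at the center column that has been used as a pseudorandom sequence.

```wolfram
(* All 256 elementary rules, each run for 30 steps from one black cell *)
GraphicsGrid[Partition[
  Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 30], PlotLabel -> r], {r, 0, 255}], 16]]

(* Rule 30 in detail, and its center column *)
rule30 = CellularAutomaton[30, {{1}, 0}, 300];
ArrayPlot[rule30]
centerColumn = rule30[[All, 301]];   (* the column directly below the initial cell *)
Take[centerColumn, 20]
```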
So, you know, what's going on here? How do we understand it? It's a bit like the digits of pi: there is a procedure to generate the digits of pi, the ratio of the circumference of a circle to its diameter, but once you generate the digits, 3.14159 and so on, that sequence of digits seems completely random to us. The same thing happens here: we have a definite rule that generates this pattern, but once the pattern is generated it looks, in many ways, completely random. What turns out to be the case is that in this computational universe of possible programs this is a very common thing: it is very common to have a situation where, even though the rule is very simple, the behavior you get is incredibly complicated. It's kind of a shocking discovery.
In a sense, you might have thought that if you wanted to make something complicated, or if you just found something in the natural world that looked like this, you would say it must have a complicated origin. But no, that intuition is wrong. That intuition is the one we get from our traditional experience with engineering and so on: if we want to make something really complicated, we have to try really hard, we have to put a lot of engineering effort into it. What this says is that in the computational universe that's not how things work.
You can have a very simple rule and get incredibly complicated behavior, and I think this is a really important fundamental fact. In fact, over the last 35 years or so it has been emerging that this fact affects all kinds of different things and explains a lot of mysteries we've had in science. Probably the greatest kind of meta-mystery is: when we look at nature, what is nature doing to create all these complex forms, all these kinds of complex things? How does it do it? What is the secret that nature has that we as humans, in our engineering and so on, have not tended to have?
The secret is that in the computational universe even very simple programs can show very complicated behavior. That is just a fundamental abstract fact, and it's one that nature makes use of when it produces complex things. But when we do engineering we don't tend to do this, because our engineering tradition has been that when we build a system we want to foresee what it will do: we're building a system to do some particular thing, and we want to set it up so that we can foresee exactly what is going to happen. We don't want something where we just have a simple rule and it does all these complicated things; we want something where we can readily predict, from how we configured the system, exactly what it's going to do. At least that's often what we want; in fact, as we look at different kinds of approaches to engineering, that's changing a bit. But let me talk a little about the broader intellectual picture around what's going on in a case like this. I think it all ties back to this thing I call the principle of computational equivalence, which is a very general principle that allows us to understand a lot of what's going on in the computational universe. So let me tell the story here: think about what a system like this is doing.
We can think of it as doing a computation: it starts from its initial configuration of ones and zeros, blacks and whites, whatever, it runs its rule, and it computes what it does. So we can think of it as performing some kind of computation, and the question is how sophisticated that computation is. Is it the case that when the rule is sufficiently simple, the computation must also be correspondingly simple, or not? What this principle of computational equivalence says is that if you look across the computational universe at all possible programs, then as soon as the behavior you see is not in some sense obviously simple, the computation it corresponds to is as sophisticated as any computation can be. That is surprising: you might think that as you progressively make a program more sophisticated, the computations that program, that system, that machine is able to do would also become correspondingly more sophisticated. But that's not the case. Instead, what this principle of computational equivalence tells us is that as soon as we pass a very low threshold, everything has the maximal level of computational sophistication. This idea has many consequences; one of them is something we have known about for a long time now,
something that has already had a very important technological consequence for us: the idea of universal computation. If we went back a little over 100 years and asked how things get calculated, there were these mechanical calculators you could buy: an adding machine, a machine that does square roots, a machine that does this or that. One might have thought that for each different kind of calculation one wanted to do, one would have to buy a different machine to do it. The realization that this isn't so mainly emerged in 1936 with the work of Alan Turing, though there was actually a precursor, just about to have its centenary: the invention of things called combinators, which weren't well understood at the time but are really a precursor to this idea. What emerged most clearly in Alan Turing's work in 1936 was this notion of universal computation: the idea that you can have a single machine which, given the right initial conditions, set up in the right way, can be made to do any computation you want. In other words, instead of having to buy a new physical piece of machinery to do each different kind of computation, you can keep one computer and just feed it different programs to make it do different computations. That idea has obviously been critical for technology, because it is the idea that makes software possible, and it has led to basically a large fraction of modern technology: this idea of universal computation, the idea that you can have a fixed computer that can run all these different kinds of programs. Well, one of the things this principle of computational equivalence addresses is the question of how much effort you have to go to to make a universal computer; how difficult is it to make a universal computer? You might think, based on our actual experience with computers, that it must be very difficult: you have to build this microprocessor, maybe it has a hundred million transistors, maybe
a billion transistors; there are a lot of details in making this thing that can be a universal computer. But what this principle of computational equivalence says is: no, that's largely not the case. In the computational universe, anything that is not doing obviously simple things is going to be capable of universal computation, and that includes things like this rule 30 cellular automaton. We don't know for sure that rule 30 is a universal computer, although it seems extremely likely. This particular case, rule 110 — let me show what it does; it grows only on one side here — this particular rule we know is capable of universal computation. Let me make that at least a little plausible here: let's start with a random initial condition, say 2000 random cells. OK, that's a bit high resolution; let me go down a bit.
Well, it's starting from random initial conditions, so it's like a random program, and what you see are all these little structures, which you can imagine are like bits, ones and zeros, moving around on wires in a computer, interacting with each other, doing logic operations and so on. With a lot of effort you can show that, yes, you can in fact set this up so that it really can do universal computation. So part of the evidence for this principle of computational equivalence is that, even among these simplest possible cellular automata, there is already one that we know is capable of universal computation.
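A sketch of the rule 110 demonstration just described: start it from random cells (the seed and size here are arbitrary) and watch the localized structures that interact rather like signals in a computation.

```wolfram
SeedRandom[1234];                       (* arbitrary seed, for reproducibility *)
init = RandomInteger[1, 400];           (* 400 random black/white cells *)
ArrayPlot[CellularAutomaton[110, init, 300]]
```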
In fact, a few years ago I had wondered about the same thing for Turing machines. Alan Turing had worked on Turing machines, and I wondered what the simplest possible universal Turing machine was. I had guessed that this particular one — this is a representation of the rule for the Turing machine — the first one that was not doing obviously simple things, was universal, and I put up a prize, and this young Englishman, Alex Smith, within a small number of months actually managed to prove that, yes, this particular Turing machine is universal.
It is capable of being programmed to do whatever you want, and that is another piece of evidence for this principle of computational equivalence. Well, one of the consequences of this principle is that universal computation is easy to find: you can make almost anything into a computer if you know how to program it. So let's go back to something like our rule 30 and ask what the significance is of knowing that it can do arbitrary computation. One question is: how do we predict what rule 30 will do?
One thing we can do is just run it and see what happens: we can run it for 500 steps and see what it does. But we might say that what science is supposed to give us is predictions: how to jump ahead and not just have to execute each step, how to say we know what the answer will be without executing every step. In the tradition of mathematical science, that is something we have been able to do routinely: if we want to know where the Earth will be in a million years, we don't have to trace the actual motion of the Earth over a million years, we can just plug a number into a formula, at least in the idealized two-body problem, and say this is where the Earth will be; we don't have to go through every one of those steps. Because of that computational reducibility we can be smarter than the system itself and say: yes, I know the answer, I know what it's going to do over a long period of time.
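As a toy illustration of that kind of computational reducibility (under the simplifying assumption of a perfectly circular orbit with fixed angular speed), stepping the motion forward a million times and evaluating a closed-form formula give the same answer, but the formula takes essentially no work:

```wolfram
omega = 2 Pi/365.25;                          (* radians per day, idealized orbit *)
stepwise = Nest[# + omega &, 0., 10^6];       (* advance one day at a time, a million times *)
formula  = omega*10^6;                        (* jump straight to the answer *)
{Mod[stepwise, 2 Pi], Mod[formula, 2 Pi]}     (* the same angle, up to rounding *)
```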
That is the tradition we are accustomed to in the mathematical sciences. What we learn from the computational universe is that in general we can't do that; the principle of computational equivalence says you can't do that. Because, in a sense, if we are going to predict what this is going to do, we as predictors somehow have to be smarter than the system itself: the system takes 500 steps to do it, but we want to jump ahead and do it in two steps. The principle of computational equivalence says that isn't going to work, because both this system and our brains, our computers, our mathematics, or whatever, are all equivalent in the level of sophistication of the computations they can do, and in a sense that forces there to be computational irreducibility in the behavior of the system.
If the system requires a certain amount of computation to do what it does, then our brains will have to do a comparable amount of computation to figure out what the system is going to do. That is why, in a sense, this system seems complex to us: we cannot predict it, we cannot jump ahead and say what it is going to do. This has all kinds of consequences if one is interested in things like the philosophy of free will, and in understanding what is predictable and what is not, but it also has consequences for questions like: what will the system do in the end? Let's take an example with rule 110 and ask: if we run this for a number of steps, can we say what it will eventually do?
We have all these little structures running around here; will these structures survive, or will they eventually die out, and how do we figure that out? We can run more steps to see what is going to happen, but we don't know how many steps it will take to resolve the question. So if we ask what will happen in the end, after an infinite number of steps, it could take an infinite amount of computation to work that out, and we have to say that this question is formally undecidable. This is really the same story as Gödel's theorem in mathematics; it is a kind of concrete version of the effect of Gödel's theorem and the idea of undecidability. In this particular case, if I remember correctly, about 3000 steps resolve it. Let's take a look — yes, I think after about 3000 steps, let's see, this particular pattern eventually dies out, but we couldn't have known in advance how many steps we'd have to go before the thing died out. OK, so, knowing that this computational universe has these kinds of rich capabilities, what does that do for us?
It gives us a great source of models for things, and it also gives us a great source of material for technology. In terms of modeling, one of the things that has happened, rather quietly, over probably the last 20 years or so — maybe because I wrote this big book called A New Kind of Science, which was basically about doing science by thinking about things in terms of the computational universe, and I don't know if it's causation or correlation, but in the years since that book came out there has really been a big change in the way people make models of things — is a shift away from the situation that had existed for 300 years, in which when people made models of things they wrote down equations,
to a situation where, when people create new models of things, they almost always do it using programs, not mathematical equations. That has been an important consequence of the way the computational paradigm enters science. Once you are doing modeling in terms of programs, you see phenomena like computational irreducibility all the time, and you realize how important simulation is for figuring out what is going to happen in systems. Another consequence is in technology: in a sense, when we look at the computational universe, we see this inexhaustible supply of interesting ideas, interesting algorithms that do different kinds of things.
Let's run a different, randomly chosen cellular automaton — this is rule 73; it does something different. All these different rules are useful for different things; in a sense we can take what is in the computational universe and extract it for different purposes. Of these 256 of what I call elementary cellular automaton rules, it has been really remarkable over the last 40 years how many have found uses. For example rule 184, which I always thought was a boring rule, turns out to be the standard minimal model for road traffic flow these days: these are free-flowing cars, and these are traffic jams starting. Pretty much every one of these rules is a different kind of program, and in addition to extracting these programs for models like road traffic flow, we can also extract them for things that are of technological use.
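A sketch of rule 184 as that minimal traffic model: black cells are cars on a circular road, and the densities chosen below are just for illustration — below half density the cars free-flow, above it jams form and drift backwards.

```wolfram
trafficPlot[density_] := ArrayPlot[CellularAutomaton[184,
    RandomChoice[{1 - density, density} -> {0, 1}, 200], 100]];
{trafficPlot[0.3], trafficPlot[0.6]}   (* free flow vs. jams *)
```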
For example, I mentioned that rule 30 is a good source of random numbers; it is also a good basis for certain kinds of cryptography systems. So in a sense we are going out into this computational universe and finding things we can use for our technological purposes. Now, this is something we have been doing forever with physics: we go out into the world and we notice that there are magnetic materials, we notice that there are, for example, liquid crystals, and we realize that, yes, we can find a use for liquid crystals, we can turn them into displays or whatever. There is something in nature that we can put to use, and the same goes for the computational universe: there are these things that just exist in the computational universe, and some of them we can capture and extract for our technological purposes. So there is a way of thinking about our relationship with the computational universe that goes something like this.
There are things that humans want to do, goals that we have, things that we think about, and then there is a kind of ocean of computational capability in the computational universe, and the question is how we find the things in it that are useful for our particular goals. You might look at one of these cellular automata and say, well, that's wonderful, it makes an interesting image, but I don't know what it's for — maybe it's useful for making art or architecture, maybe it's useful for processing images. But when we have a goal, we can go out into the computational universe and see whether we can find a computational system that achieves that goal. One of the things — and I don't want to go too deep into this — that I have spent a large part of my life trying to do is build a bridge between what humans want to do and what the computational universe makes possible, and the main focus of this has been building a computational language that allows us to express things computationally.
I've been using the Wolfram Language here to do the things I've been doing. You know, I could pick something at random: let's say I make a random graph, or I could say, take that random graph and show me the communities that exist in it, or I could do all kinds of other computations. The notion is: can we set things up so that when we want to think about something in the world, we have a language in which to think about it computationally, and then a language in which that computational thinking can be translated into something that can actually be executed, drawing on this computational universe of possibilities? One of the biggest things that has been important is making knowledge of the world part of the computational language, so that we can express things about it.
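For instance, the random-graph demonstration mentioned a moment ago looks roughly like this (the graph size is arbitrary):

```wolfram
g = RandomGraph[{100, 200}];        (* 100 vertices, 200 random edges *)
FindGraphCommunities[g]             (* vertices grouped into communities *)
CommunityGraphPlot[g]               (* draw the graph with communities highlighted *)
```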
For example, let's try something like this: say, capital cities of Europe. I can enter this in natural language — our technology drives knowledge systems and things like Siri and Alexa, and that makes use of this idea of being able to take natural language and turn it into a precise computational language that a computer can understand. So there are the capital cities of Europe, and now that these are computationally represented things, I could say: let me make a picture of where they are, or maybe I could do something more ambitious.
I might say: let's take them and find, you know, the shortest path through them. We can do that — OK, here we go — and then, having the cities in that order, let me put all this together so you can see it at once. The basic point is that we can express this idea — that we want to find the shortest path through the capital cities of Europe — in this computational language, and both we as humans and the computer can understand it, and the computer can then execute it for us. That is something really powerful.
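A sketch of how that capital-cities example might be written out explicitly in the Wolfram Language; in the talk the input comes from natural language, so the entity-based forms below are an assumption about one way to fetch the same data.

```wolfram
capitals  = EntityValue[EntityClass["Country", "Europe"], "CapitalCity"];
positions = EntityValue[capitals, "Position"];          (* geo positions of the capitals *)
{length, ordering} = FindShortestTour[positions];       (* an approximate shortest round trip *)
GeoListPlot[capitals[[ordering]], Joined -> True]       (* the cities joined in tour order *)
```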
You know, my view of what we are achieving with this is something like the following. Four hundred years ago, people who were thinking about doing mathematics generally talked about it in words that described mathematical operations. Then people started inventing mathematical notation, things like plus signs and equals signs, and then mathematics really took off: you got algebra and then calculus and so on, people became able to express themselves systematically in mathematical terms, and out of that came the rise of the mathematical sciences. In a sense, what we are trying to do with our computational language is the same kind of thing, but now for the computational paradigm: to provide a full-scale language in which you can express things about the world computationally, both so that we as humans can sharpen our own thinking, and so that we can get a computer to actually do things computationally and make use of all the power that exists in the computational universe. So, in a sense, a big part of my life has been building that kind of bridge between what we humans think about and what is possible in the computational universe, and also pulling in the kind of computational knowledge that exists in our civilization and that defines the things we care about. OK, anyway, that is a kind of introduction to how to think about this notion of computation; now let's talk about physics. When I used to do physics, in the 1970s and so on, there were these paradigms for doing physics — things like quantum field theory and general relativity — and there was a lot to discover within those paradigms, and I didn't really think much about questions like what lies underneath those paradigms. But after seeing all these things about the computational universe I became a little more ambitious, and got to thinking: well, maybe underneath all this complicated mathematical physics that we know there is actually just a simple program, and just as in the case of something like rule 30, where we have this little program and it does all these complicated things, maybe that is how our universe works.
So I started thinking about that, and in the early 1990s I was wondering whether this could be the way things are. In particular, if there really is going to be a simple program that basically makes our whole universe, one thing that is somewhat inevitable is that everything we know about our universe today — that there are three dimensions of space, that the muon-to-electron mass ratio is 206 point whatever — all of those facts do not fit directly into a small program. All those facts have to emerge from that program: there has to be some small program, and only when we look at its consequences do we see all these features of the universe as we know it today. So, in a sense, one of the questions is: what could the fundamental raw material of the universe be?
What could lie beneath things like space and time as we know them today? I had some ideas about how it could work, and in the late '90s I had actually figured out quite a lot about what that kind of thing underneath space and time could be, and how to reproduce things like Einstein's general theory of relativity from it. That was part of this book, A New Kind of Science; really the point of that book was to explore the computational universe as a way of understanding things in the world, and I saw physics as one particular use case. I think physicists at the time were rather of the view: oh no, no, we don't need a different way of doing physics, we're fine. And it has happened at various times in history that people have said we're very close to having a fundamental theory of physics; but this one didn't fit into the traditional flow of quantum field theory and things like that. For a long time I intended to go back and explore these ideas further, but I didn't manage to, and then about a year and a half ago a couple of young physicists who had been interacting with me said, look, you really have to work on this. I had had a new idea that was a kind of technical breakthrough in how to think about these things, and they said this is really too important to let another 50 years go by with nobody looking at it.
So I got to work on this — in fact, one of them, Jonathan Gorard, I think is on this live stream; he may already be answering questions and explaining things to people there. OK, so the result of this is that we have really been able to make a lot of progress in understanding the fundamental theory of physics, so let me try to describe a little of what we discovered. The first question is what the universe is made of. First let me mention that if you are interested in more details there is a big website that has tons and tons of material, and there is a visual summary there that gives a rough picture of how we think the universe is built.
The question is what the universe is fundamentally made of, and so the first question is what space is. That is not a question people usually ask: ever since Euclid, it has just been, well, space is something in which things exist. You can take a point and put it anywhere you want; the point has no extension, it is infinitely small, and all there is to say about it is where it is. But actually we don't think that is correct; we don't think space is a continuous thing where you can put a point wherever you want.
Space is discrete. It is actually made up of a bunch of discrete points. In a sense we have seen this story before: when we think about something like water, we might say that water is a continuous fluid — we can pick any point in the water and there is still water there — but that is not true, because water is actually made up of a bunch of discrete molecules, and you either hit a water molecule or you don't. Well, we think space is like that too. Space is not continuous, which is what has been assumed in both physics and mathematical physics ever since Euclid; we don't believe that is correct. Instead, we think space is made of a kind of atoms of space, just as materials are made of atoms of material, and those atoms of space are just points, disembodied points. So what is space like?
Well, we think space is a big network, and really all that says is that there is a point and that point has certain neighbors in the network; the point is connected to other points. We could just say that space is made, abstractly, of a bunch of elements, and those elements have relations that relate them to other elements, and that is what space is: it is just this giant network — actually, more precisely, a hypergraph. An ordinary graph is something where pairs of points are connected to each other; in a hypergraph you can have more than two points connected by a single hyperedge. But the basic idea is that it is this giant hypergraph containing a great many points — in our universe there might be 10 to the 400 points, a very large number — and then, just as water on a large scale seems continuous to us, so space on a large scale seems continuous to us.
Microscopically, the atoms of space might be around 10 to the minus 100 meters across; by comparison a proton is about 10 to the minus 15 meters across, so they are very, very small on that scale. The idea is that space consists of this discrete hypergraph, and so the question is: if that is the case, how is it that on a large scale space seems continuous and three-dimensional? On a small scale it may not be like that at all, and nothing guarantees that space is three-dimensional rather than 3.1-dimensional or something else; all those things have to emerge. There is no sense in which these points exist at coordinates in space; they simply have relations to one another, and it is an emergent fact that there is a continuous large-scale structure of space. That is the notion of what space is.
By the way, I should say there is the question of what actually exists in the universe. In the current kind of physics we have thought that there is a background of space and then there are matter and energy, the things that exist in space. In this model it is minimal: the only thing that exists in the universe is basically space, just this giant hypergraph, which might have 10 to the 400 nodes, and everything that exists in the universe is a feature of this hypergraph. Now, that might seem strange, but it seems less strange when you look at something like — well, let's see if I can bring this up again — yes, that cellular automaton, where we know everything is made of a bunch of discrete black and white cells, and yet there are these structures that emerge purely as features of the dynamics of the system. That is how we think about space in the universe: there is just this discrete hypergraph, and features of that hypergraph represent all the things that exist in the universe, all the electrons and particles and energy and so on. That is a little disappointing in a sense, because when we work out how much of the activity of the universe is involved in maintaining the structure of space, versus being involved in all the things we care about, like the electrons and protons and so on, it seems that maybe only about one part in 10 to the 120 is the things we care about, and all the other parts in 10 to the 120 have to do with just maintaining the structure of space. OK, so that is the structure of space.
So how does space work, and what happens in space? What is the progression of time? In the way we think about things, time is a computational process. What happens as time progresses is that this hypergraph is progressively rewritten. There are rules that say, for example: if you see a piece of hypergraph that looks like this, rewrite it into one that looks like that. All the time, all over the universe, this is what is happening: there is this structure, this hypergraph that represents space, and that structure is continually rewritten, and that progressive rewriting is the progress of time. In this kind of model the progress of time is irreducible: the phenomenon of computational irreducibility that I mentioned before is fundamental to the progression of time.
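Here is a sketch of one such hypergraph rewriting rule, using the WolframModel function from the Wolfram Function Repository (assuming it is available); the particular rule and initial condition are just an example of the kind of thing described above.

```wolfram
evolution = ResourceFunction["WolframModel"][
   {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},  (* rewrite this piece into that one *)
   {{0, 0}, {0, 0}},                                      (* a tiny initial hypergraph *)
   10];                                                   (* number of rewriting steps *)
evolution["FinalStatePlot"]                               (* the hypergraph that results *)
```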
This kind of model also explains how thermodynamics works, and a bunch of other things; there is this notion of computational progress being the progress of time. Now, the first thing you might say is: OK, that's great, but how can it possibly be right? We have known since 1905, from the idea of special relativity, that space and time are more or less the same kind of thing; how can you say that space is the extension of this spatial hypergraph while time is this progression of computation? Well, it is a bit subtle, but here is how it works. When you are an observer embedded within this universe, you are part of this hypergraph.
There is a question of what you can actually know about what is happening in the universe, and it turns out that the only thing you can know is about updating events and the relations between updating events. In a sense, every time one of these rules is applied it is a little event that happens to this network, and you can ask, when you look at the different events that happen, what their causal relationships are: one event perhaps cannot happen yet because the input it needs has to come from the output of some other event. That defines a partial ordering of events, a kind of thread of causality, a set of causal relations, and from it one constructs a causal graph that represents the causal relationships between events. A very extreme version of this: imagine that you are an observer in this universe, and imagine that in the universe only one event happens at a time, but the place where that event happens moves around the whole universe.
You might say that theory cannot possibly be correct — after all, I can see things happening in other parts of the universe. But the point is that you cannot know about anything until the effect of an event has reached you, so to speak: while that single updating thread is going around, you are effectively frozen until you get updated, and you cannot know what has been updated until you yourself are updated. In the end, what you realize is that the only thing you can really sense is this causal network of causal relationships between events. Well, it turns out there is another important phenomenon, which we call causal invariance. Let me back up — I think I forgot to say one thing: when you make these updates there can be many possible updates you could make in the network, and the question is which one you do first and which you do next. Well, it turns out that when this phenomenon of causal invariance holds, it just does not matter in what order you make those updates:
whatever order you use, the graph of causal relationships you get is not affected by the particular order in which you made the updates; it is something that depends only on the structure of the system. It is that property of causal invariance — together with the fact that what you ultimately deal with is this causal graph — that turns out to be the story that leads to special relativity. The phenomenon that physics is the same whether you are stationary or moving at a third of the speed of light turns out to be simply that those are different ways of sampling this causal graph, and you are guaranteed to get the same causal graph because of causal invariance. In fact you can go even further when you look at the spatial hypergraph.
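Continuing the sketch above (and again assuming the WolframModel resource function with its "CausalGraph" property is available), the causal graph of events just described can be extracted from the same kind of evolution:

```wolfram
ResourceFunction["WolframModel"][
  {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},
  {{0, 0}, {0, 0}}, 10, "CausalGraph"]    (* events and the causal edges between them *)
```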
Let me go into a little more detail here; you can start asking questions about the spatial hypergraph. Here is a typical rule for the spatial hypergraph, and let's say we start from this initial condition. We can just run it — it is a bit like the cellular automaton, we are just running it — and although the rule is very simple, the structure we get can be quite complicated, and if we change the rule we get different kinds of structures; we can make all kinds of crazy things happen. Sometimes — here is another pretty simple rule — we run it and in this particular case we get a structure that is pretty simple: it is weaving this simple cloth-like thing that looks to us like a kind of two-dimensional grid. And sometimes — here is another simple rule — we get a thing that looks like this. Now, when we draw these pictures, it is worth understanding that these are just ways of representing the graph: the only thing we know for sure is how the points are connected, so this is the same graph represented in different ways; a given node is connected to the same nodes, we are just drawing it differently. So the question is, when we look at these graphs and imagine a very large one of them, what is the effective dimension of the thing? In other words, if it were to behave like a grid, what dimension of grid would it be — a one-dimensional thing, a two-dimensional thing, a three-dimensional thing — and how do we tell? There is a pretty easy way to tell; here is how it works. Say we have something that is a grid, and we can easily see that it looks like a two-dimensional grid. We start at some point on the grid and progressively grow a region around it,
and we ask: if we go out r steps, how many points do we reach? Well, here it grows like r squared, because the region forms essentially a square. If we go to another graph, one that can be drawn as a three-dimensional grid, and do the same computation, looking at the growth of this kind of region, it grows like r cubed, and that third power is what tells us the effective dimension of the network. We can do exactly the same thing for one of these systems that we grow from our underlying hypergraph rules, and work out its effective dimension — here is a graph showing how that is done; in this particular case the effective dimension is about 2.6. For many (though not all) of these kinds of hypergraphs there is a limiting dimension, some definite value of the effective dimension of the thing.
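A minimal sketch of that ball-growth dimension estimate, using an ordinary two-dimensional grid graph as a stand-in for a spatial hypergraph (the grid size and starting vertex are arbitrary):

```wolfram
g = GridGraph[{40, 40}];
center = 820;                                 (* a vertex well away from the boundary *)
ballSizes = Table[Length[VertexList[NeighborhoodGraph[g, center, r]]], {r, 1, 12}];
(* For d-dimensional space the ball size grows like r^d, so the slope of the *)
(* log-log fit estimates the effective dimension: close to 2 for this grid.  *)
Fit[Transpose[{Log[Range[12]], Log[N[ballSizes]]}], {1, x}, x]
```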
When you look at it as a kind of large-scale continuum, it turns out that not only is there a dimension, there can also be a curvature. In this particular case we would say it is roughly a two-dimensional grid, but it looks as if it is curved. Well, you can calculate curvature the same way you calculate dimension: you start from a particular small region and grow it, and ask what the area of that region is. You know the area of a circle is pi r squared, but if you draw a circle on a sphere its area is not exactly pi r squared; it is pi r squared with a correction that depends on the curvature of the sphere. You can do exactly the same thing with these hypergraphs, and so you can compute the effective curvature of the continuous space that corresponds to the limit of this hypergraph.
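For reference, the standard differential-geometry statement of that correction is the formula for the volume of a small geodesic ball of radius $r$ in a $d$-dimensional curved space, where $R$ is the Ricci scalar curvature and $C_d$ is the flat-space constant:

$$ V(r) = C_d\, r^{d}\left(1 - \frac{R}{6(d+2)}\, r^{2} + O\!\left(r^{4}\right)\right) $$

It is a correction term of this form, measured on the hypergraph, that gives the effective curvature.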
Why is that significant? Well, it is significant because the whole idea of Einstein's general theory of relativity is that space can be curved. Light always travels in what it thinks are straight lines: you shine a laser and it goes in a straight line. The question is what a straight line is. In a plane, straight lines are ordinary straight lines, but on a sphere, in a curved space, straight lines are not straight when you look at them from outside. What happens in Einstein's theory of gravity, the basic idea, is that the presence of mass and energy causes curvature in space, and then what counts as a straight line is different: it is deflected by the fact that there is curvature in space, and it is that deflection that is responsible for the change in motion associated with gravity.
Well, exactly the same thing happens in our models. We find that the limit of these hypergraphs has curvature, and we can work with it. A big clue, which I had known for a long time, is that the correction term is proportional to the Ricci scalar curvature, which is exactly what appears in Einstein's equations. What we discovered in the end is that the large-scale limit of the behavior of our networks behaves the same as physical space according to Einstein's equations, according to Einstein's general theory of relativity. In a sense it is analogous to the following: if we look at a fluid like water, we have lots of molecules bouncing around, and we can ask what continuum equations the fluid satisfies when thought of in terms of velocities and so on rather than in terms of individual molecules. That same kind of derivation we can now do for space: we are making space out of atoms of space, and we are asking what continuum equations emerge from that, and the continuum equations — a very interesting fact — end up being exactly Einstein's equations. Now, the way Einstein's equations are set up there are two pieces: one is Einstein's equations in a vacuum, what happens when there is no matter around, and the other is Einstein's equations in the presence of matter. So the question is how that part works.
I said matter is just a feature of this spatial hypergraph, but one of the important things about matter is energy, and somewhat to my surprise it turned out that there is a very nice way to understand what energy is in our models. Energy is basically the amount of activity in the network. You can think of it in terms of the causal graph: there are these causal edges that represent the fact that one event happened and affects another event. Well, if we look at the flow of causal edges through what in relativity theory are called spacelike hypersurfaces, which are a kind of constant-time surfaces,
that is, if we look at how much activity there is, then the flux of causal edges through spacelike hypersurfaces is, more formally, precisely the energy density in our models. So, in a sense, one feature of the causal graph gives us energy; another feature of the causal graph, the flux of causal edges through timelike hypersurfaces, gives momentum. And it is very nice: you can fairly easily derive E = mc squared from this, which is something we have not been able to derive before — it has just been a fact of nature — but now we can derive it from properties of these causal graphs. Once we have this idea of energy associated with these causal edges, it turns out that the way energy enters gives us precisely the full version of Einstein's equations. Let me explain one part of this. One of the things that is important in thinking about spacetime is the idea of geodesics: shortest paths, the analogue of straight lines, although since space can be curved the line may not actually look straight. Geodesics are the paths followed by particles that are not acted on by any forces other than those associated with the curvature of space, with gravity. So the question is what geodesics do in our models. Well, it turns out that geodesics are deflected by the presence of causal edges — it is not very difficult to see that that has to be the case — and it turns out that where there is energy, where there are causal edges, there is a deflection of geodesics, and that deflection of geodesics is precisely the deflection associated with the force of gravity.
So an interesting fact is that one can derive general relativity from these models: we start just from these atoms of space and this general idea of rewriting rules and so on, and we can derive general relativity, and we can derive features like black holes. You can actually see this when you look at a causal graph: here is a typical causal graph, and here is a causal graph that has an event horizon. In a causal graph, each of these yellow dots is an event, and these edges say this event affects this event, which affects this event, and so on. But if you are over here, the events over here can never affect those events over there, and that is what happens when an event horizon forms. This is effectively like a black hole — actually this particular one is more like a cosmological event horizon — but it is a place where one part of the universe can no longer have a causal effect on another part of the universe, and that is another thing you see in these models. By the way, something I looked at recently was the question of faster-than-light travel in these models, and it turns out one can address questions like that.
By the way, something I looked at recently was the question of faster-than-light travel in these models, and it turns out one can say things about it. It turns out that the problem of traveling faster than light becomes something more like a generalized engineering problem than a physics problem. Being able to travel faster than light is a bit like asking a gas molecule in a room to find exactly the right way to bounce between all the other gas molecules so that it gets to the other side of the room very fast. That ends up being the kind of problem faster-than-light travel is, and it is something that, in principle, when you look at things in terms of these models, is possible, although in practice it may well be impossible.
Because what would it take for a molecule to do enough computation to work out where to go—to overcome computational irreducibility—in order to do that? Okay, I won't go further in that direction; let me get back to the general story of physics. A big part of physics is gravity and the large-scale structure of spacetime; another big part is quantum mechanics and the small-scale structure of the universe. In classical physics what we mostly have are models that say definite things are going to happen.
You know, you throw a ball and it follows a particular trajectory. The great idea of quantum mechanics, from 100 years ago, is that in reality definite things don't happen; rather, all possible paths are followed, and we only get to know certain probabilities of outcomes. So in a sense there is no deterministic trajectory for the ball: it is as if it splits into all possible paths and we just get to see certain probabilities. Okay, so how does that work in our models? Well, what's really interesting is that quantum mechanics is not something that gets inserted into our models; it is something that cannot be avoided—it is inevitable in our models—and the reason has to do with what I said about how these updates get done.
The point is that these updates are done wherever possible, but there are many possible ways to do them—many different orders and places in which the updates can be applied; there is no single way updates have to be done. So what you get is what we call a multiway system, where you start from an initial condition and, in this particular case, there was only one way to do the first update, but here there were three ways to do it, so you end up with three essentially different configurations of the spatial hypergraph associated with those three different update choices, and further down you get more and more states.
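Here is a minimal sketch of a multiway system, using string rewriting instead of hypergraph rewriting to keep it short (strings are the usual toy stand-in; the rule here is invented purely for illustration). Each step applies the rule at every possible position, producing all successor states.

```python
# Toy multiway system: apply the rewrite rule "AB" -> "BA" at every
# possible position of every current state, collecting all successors.

def successors(state, lhs="AB", rhs="BA"):
    """All states reachable by one application of the rule lhs -> rhs."""
    result = set()
    start = state.find(lhs)
    while start != -1:
        result.add(state[:start] + rhs + state[start + len(lhs):])
        start = state.find(lhs, start + 1)
    return result

def multiway_evolution(initial, steps):
    """List of generations: the set of states after each number of steps."""
    generations = [{initial}]
    for _ in range(steps):
        nxt = set()
        for state in generations[-1]:
            nxt |= successors(state)
        generations.append(nxt)
    return generations

for i, gen in enumerate(multiway_evolution("ABAB", 3)):
    print(i, sorted(gen))
```

Even in this tiny example you can see the next point coming: two different branches produced at the first step later land on the same string again, which is the merging phenomenon discussed next.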
And one thing that is interesting is that you don't just get branching events where two different things can happen; you also get merging events, where different sequences of possible updates end up in the same state after a few steps. Well, it turns out that this branching and merging in the multiway graph seems to be the story of quantum mechanics—it gives us quantum mechanics. What is happening here, essentially, is that the different possible paths you can follow correspond to different quantum possibilities. The way of representing quantum mechanics that is perhaps the most efficient and effective is something called the Feynman path integral, where the idea is that you combine all of these possible paths and each one is given a certain weight. That, in effect, is what is happening here.
And when we come to explain this, one of the critical things—let's go back to spacetime for a moment—is that when we observe spacetime we do so by choosing a certain reference frame: we choose a particular way of defining what counts as the current moment, what counts as simultaneity. Well, in quantum mechanics we have to do the same thing: to observe quantum mechanics we have to choose what we call quantum observation frames, which are a direct analog of reference frames, and in a sense those quantum observation frames correspond to the different measurements we can make on quantum systems. One of the things about quantum mechanics that has been very hard to understand is why, when quantum mechanics involves all these possibilities, people nevertheless conclude that definite things happen. In our models, what happens is that these different quantum observation frames—which essentially correspond to different choices of how to make measurements—all, because of this phenomenon of causal invariance, in some sense come to the same conclusions. It is the same reason that relativity works, and it is the reason we have objective reality in quantum mechanics. Well, there is yet another thing. Go back to this causal graph, laid out with time running down the page, and say: let's take a slice at a particular time and see what we can conclude about the universe based on what is happening at that particular time, looking locally at this causal graph. What will we get when we take that slice and effectively analyze that causal graph?
Basically, we will reconstruct the spatial hypergraph that represents the instantaneous structure of space. Now do the same thing on the multiway graph: what do we get? We get this thing we call a branchial graph. This branchial graph is essentially a representation of the structure of the branches in the system—which states are related by having a common ancestor among the branches. In the language of quantum mechanics, this is a map of quantum entanglements: the different nodes in this branchial graph are different quantum states, and the connections represent a kind of closeness, an entanglement, of those different quantum states.
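Continuing the toy string-rewriting sketch from above (same invented rule; this is illustrative only, not how the actual project computes branchial graphs), one simple way to approximate a branchial graph is to connect two states in the same generation whenever some state in the previous generation produces both of them:

```python
from itertools import combinations

def successors(state, lhs="AB", rhs="BA"):
    """All states reachable by one application of the toy rule lhs -> rhs."""
    result, start = set(), state.find(lhs)
    while start != -1:
        result.add(state[:start] + rhs + state[start + len(lhs):])
        start = state.find(lhs, start + 1)
    return result

def branchial_edges(previous_generation):
    """Connect states that share an immediate common ancestor."""
    edges = set()
    for ancestor in previous_generation:
        for a, b in combinations(sorted(successors(ancestor)), 2):
            edges.add((a, b))
    return edges

# One generation after "ABAAB": its two children share an ancestor,
# so they count as "branchially close".
print(sorted(branchial_edges({"ABAAB"})))
```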
Just as following connections in the spatial hypergraph tells us about the closeness of points in space, the branchial graph tells us about the closeness of points in branchial space, the space of quantum states. So why is that interesting? Well, one of the things you want to do is work out how quantum mechanics works quantitatively. There are definite equations: for example, in the path integral there is an equation that says the weighting of different paths is given by e^(iS/ℏ), where S is called the action and ℏ is Planck's constant. So what we can do is this: in this multiway graph we can ask, just as we asked about geodesics in spacetime, about geodesics in the multiway graph, and we can ask about the deflection of those geodesics. It turns out—and again I am compressing a lot of physics and fairly abstract material here, so I will just give you an outline—that this idea that energy is a flux of causal edges also works on the multiway graph: there is a notion of a multiway causal graph, and a notion of a quantum-mechanical analog of energy. And what happens is that the deflection of geodesics in the multiway graph is determined by the presence of energy in this multiway causal graph. Deflection of geodesics in physical space means your particle goes somewhere different in physical space; in the multiway graph it means you go somewhere different in branchial space. Now, one of the most mysterious things in quantum mechanics is the presence of complex numbers in describing the amplitudes for different things to happen. Actually I think packaging that as a single complex number, rather than keeping the magnitude and the phase separate, is something of a mistake.
But in any case, the basic point is that the phase of that complex number turns out to be essentially a position in branchial space. So the picture we have is that the presence of energy deflects geodesics so that they end up in different places in branchial space, and that deflection process—ending up in different places in branchial space—is what changes the phase of a quantum amplitude, which turns out to be exactly what the Feynman path integral says quantum mechanics should do. The action is basically an energy density, so this says that the presence of energy density deflects geodesics—not in physical space but in branchial space—and that shows up in the phase of these quantum amplitudes. To me this is one of the most spectacular things here.
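For reference, the standard Feynman weighting being pointed at here is the textbook formula below; I state it for orientation rather than as something derived in this talk.

```latex
% Quantum amplitude as a sum over all possible paths, each weighted by
% the exponential of (i / hbar) times the action S of that path:
\mathcal{A} \;=\; \sum_{\text{paths}} e^{\,i\,S[\text{path}]/\hbar}
% The claim sketched above is that this weighting arises from geodesic
% deflection in branchial space, just as Einstein's equations arise
% from geodesic deflection in physical space.
```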
So: the presence of energy causes the deflection of geodesics, which in physical space leads to Einstein's equations. Exactly the same phenomenon, operating in branchial space, leads to the Feynman path integral. In other words, these two strands of 20th century physics, general relativity and quantum field theory, are really the same theory, just played out in different spaces. It has been a big mystery for a long time how those theories fit together; it turns out they are basically the same theory. And when you look at black holes, and at quantum mechanics in relation to black holes, what you are seeing is the interaction between what is happening in physical space and what is happening in branchial space. We don't understand all the details of how that works, but a lot of things that have been very difficult to understand, very paradoxical, start to look very natural in this framework, which is pretty exciting. Well, that was a sketch—I don't know how I'm doing on time; horribly, actually—a sketch of how this theory of physics works. I could go into much more detail about how we understand particular phenomena in physics, but let me finish by talking a little about what we learn from the fact that this seems to be how the universe works. It is going very well: we are progressively looking at different phenomena in general relativity, quantum mechanics and so on and understanding how those things work, and we are gradually identifying all kinds of experiments that could be done, informed by these models, that would explore aspects of the universe we in many cases hadn't thought to look at before. Getting to the point of saying the deflection of light in a particular situation should be, say, 1.75 arc seconds takes a lot of hard work, and we are not there yet; we are more at the point of saying these are the kinds of things you should look at—like dimension change in the universe: perhaps the early universe started effectively infinite-dimensional and gradually cooled to the point where it is three-dimensional, and perhaps there are signs of dimension change that are still detectable in the current universe, and so on. But anyway, let me talk a little about the bigger picture of what this kind of "computation all the way down" means.
One thing to understand: I mentioned computational irreducibility, the phenomenon that when you run a system you may not be able to predict quickly what it is going to do—you may just have to follow each step and see what happens. Obviously that could be a real killer for a theory of physics, because it could be that, yes, we have the right theory, we have the right rewrite rules for our hypergraph, but running it for 10 to the 100 steps to find out what happens after 14 billion years simply cannot be done.
We might be able to calculate what would happen in the first 10 to the minus 100 seconds, but we wouldn't be able to go any further; we would be trapped by computational irreducibility. I actually thought that is what was going to happen—I thought we were going to be able to say, yes, we have a really good theory for the first 10 to the minus 100 seconds of the universe, but that is as far as we can go, and we would basically lose the achievements of 20th century physics. What we realized is that whenever there is computational irreducibility there are always pockets of computational reducibility—there are always some things you can say—and what happens in this case is that those pockets of computational reducibility are precisely 20th century physics.
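A standard small illustration of this kind of irreducibility—my own minimal sketch, using the elementary cellular automaton rule 30 that is often used as the example—is below: as far as anyone knows, to find the pattern at step n you essentially have to run all n steps.

```python
# Minimal rule 30 cellular automaton: the usual poster child for
# computational irreducibility -- to get the state at step n,
# in practice you run all n steps one by one.

def rule30_step(cells):
    """One step of rule 30 on a list of 0/1 cells (padded with zeros)."""
    padded = [0, 0] + cells + [0, 0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

cells = [1]  # start from a single black cell
for step in range(8):
    print("".join("#" if c else "." for c in cells).center(40))
    cells = rule30_step(cells)
```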
There are two broad kinds of things that can be said—one is general relativity, the other is quantum field theory—and both manage to poke through this general irreducibility. Underneath, when you look at all the details, everything is irreducible and there is nothing simple you can say—just as when you look at water, underneath all the molecules are bouncing around in very complicated ways, and yet we can still say that the overall flow works this way or that. In a sense it is inevitable that this has to be the case, because we know there are things in the universe that are predictable.
We expect from the principle of computational equivalence that there will be computational irreducibility, but the way we operate in the universe has to be such that we latch onto those pockets of computational reducibility, so that we can make predictions about what will happen and, so to speak, lead our lives. In a sense, computational irreducibility is also what makes things interesting: if everything were computationally reducible, we could say, well, I could live my life, but I already know the answer is 42—I don't need to go through all the steps of living it. Computational irreducibility tells you that that is not really how things work: there is something inexorable that is achieved by the passage of time, this irreducible computation that gets done over time. So in a sense, what we see in 20th century physics represents the pockets of computational reducibility in this bed of computational irreducibility. So then there is a big question. Let's say we find a rule—and we have not yet found one that reproduces all the known features of physics.
We have found rules that reproduce particular features of physics, but we have not found a single rule that does everything you need—you know, finite-dimensional space, particles, et cetera, et cetera. We know that within this class of rules there are some that do all those things, but we do not yet have a single one that does them all at once. But let's say we had that rule. We could say: here, let me show you, here it is, this is something I can write down in a small space—there is this little program and it makes our entire universe. But then isn't that really surprising? Why should we have been the lucky ones who got the universe with the very simple description?
In a sense, that was the story of Copernicus: 500 years ago we thought the Earth was the center of the universe, that we were unique, and then the great realization that came from a lot of science was that no, we are not special—we are just on some random planet at the edge of some galaxy, and so on; there is nothing special about us. So if we find the fundamental theory of physics and it is this one little rule, how is it that we got the simple little rule and not the incredibly complicated one? How did we get this particular one?
So I thought we weren't going to be able to say much about that question, but I was wrong. It turns out there is a notion that goes even one step further than the multiway graph I talked about, which represents following all the different possible updates that can be made according to a particular rule. We call it the rulial multiway graph, and it says: not only can you make different choices at each step about where to apply the rule, you can also apply different rules. So you get this giant multiway graph, and you might say, how on earth could you say anything about it? Well, it turns out it has causal invariance, and that means that when you put a reference frame on this rulial multiway graph, all frames, in some sense, give the same predictions.
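Continuing in the same toy string-rewriting spirit as the earlier sketches (the rules are invented for illustration; this is only a sketch of the idea, not the project's actual construction), a "rulial" multiway step simply applies every rule in a whole set, everywhere it matches:

```python
# Toy "rulial" multiway step: instead of a single rewrite rule,
# apply every rule from a set at every possible position.

RULES = [("AB", "BA"), ("BA", "AAB")]  # made-up rules, purely illustrative

def rulial_successors(state):
    """All states reachable in one step under any rule in RULES."""
    result = set()
    for lhs, rhs in RULES:
        start = state.find(lhs)
        while start != -1:
            result.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return result

frontier = {"ABA"}
for step in range(3):
    print(step, sorted(frontier))
    nxt = set()
    for s in frontier:
        nxt |= rulial_successors(s)
    frontier = nxt
```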
What that says is that there are many different ways you can describe the universe, but within all those description languages it is always the same universe, just described in different ways. Just as reference frames in relativity correspond to different states of motion, and reference frames in quantum mechanics correspond to different choices of measurement, reference frames in this rulial space correspond to different choices of the overall description language used to describe the universe. Having spent much of my life as a computational language designer, this is a funny conclusion for me, because in a sense what it says is that our project of finding a fundamental theory of physics is essentially a language design project: it is about finding a description language that is useful to us, that we can implement on our computers, and that represents the universe. And there is not just one of them—there may be many different ones—but some of those description languages may be completely unsuitable for us humans. We have particular senses; we have a particular size in the universe; at our size, the speed of light is very fast compared to the processing speed of our brains, so we imagine things happening in successive moments of time, laid out across space at each moment. If we were much larger—some kind of planetary-scale beings—the speed of light would not lead us to conclude that things work that way.
So the description language we have for the universe is very tied to who we are, so to speak, and one of the things you realize is that there is actually an infinite collection of different description languages that could be used for the universe, and they are completely incoherent with one another. Ours is the one that works with the senses that give us our experience of the world, but we could have a completely different one. People used to think that if there are extraterrestrial intelligences, at least we share the same physics with them—but that is not really true: we can expect there may be completely incoherent views of how the universe is set up, based on these different description languages. The really remarkable thing, though, is that even with all that arbitrariness we can draw definite conclusions, and in fact there seems to be a very beautiful way in which some very abstract ideas in mathematics, involving things like higher category theory and so on, converge with these ideas about physics. One thing we have been interested in is how a kind of metamathematics—the theory of all possible mathematics—converges with the way the physical world works; these are things that emerge from this computational view. Right, I should end here, but I think the main story is that in the last 45 years the most important thing that has happened is this idea of computation: thinking about things in computational terms, thinking about programs as descriptions of things, thinking about the world at a practical level in terms of computational language—having this computational language to describe the world, to go from human thoughts to things that can be implemented computationally; using that computational language to enable "computational X" for every field X—and then, in particular for physics, using this idea of computation to try to see how we dig deeper, how we figure out what the machine code is underneath all the physics that we know and that our universe runs on. And what is to me super exciting, and actually very unexpected, is that we are really at the point where I think we know basically what that machine code looks like: we do not know all the details of the instruction set, but we know the basic structure of that machine code, and we can see how it all works.
All these different things that seemed very beautiful but quite disconnected in physics fit together, and one of the things that has been exciting is that a lot of approaches to mathematical physics have emerged, particularly in the last few decades, that are mathematically very beautiful but do not necessarily connect to the way physics seems to actually work. What I think our models have provided is a kind of Rosetta Stone that lets you understand how all these different beautiful pieces of mathematical physics relate to each other, and how they also relate, at least as idealizations, to the kinds of things that happen in fundamental physics.
Okay, I should end there. Thank you very much, and I am happy to take questions and discuss for as long as we can go. Thank you.
