
A New Kind of Science - Stephen Wolfram

Jun 09, 2020
And it's particularly appropriate that at this year's H. Paul Rockwood Memorial Lecture (from 2003) we welcome Stephen Wolfram, who has made an enormous contribution to computer science, and also, through the development of the Mathematica program, to research in many areas of science and mathematics. The other factor that shapes Stephen's perspective at this conference is that he comes from physics: he earned his PhD in physics, the youngest in Caltech's history, and he was among the youngest to receive a MacArthur genius award. He was a professor at Caltech, moved to the Institute for Advanced Study, where I met him for the first time, and from there to the University of Illinois, where he started a complex systems research center. Finally, by founding the company behind Mathematica, now based in Urbana-Champaign, he truly set the standard for math programs. So without further ado, I would like to introduce the president and CEO of Wolfram Research, the creator of Mathematica, and the author of A New Kind of Science: Stephen Wolfram.
Well, thank you very much, Terry. I'm sure many of you have seen this great book somewhere. It took me about 10 years to write, and I'm very proud of it, and in fact by now I hope some of you have at least had a chance to read it. What I want to do here today is tell you a little about what it says. Really, what I want to do is talk about a kind of intellectual structure that I've spent most of the last 20 years building, and I think it's a structure that has some pretty interesting implications, both now and in the fairly distant future.
Back in the late 1970s I was a young physicist working mainly in particle physics, but I also worked a certain amount in cosmology, and from there I got interested in the question of how structures in our universe emerge, from galaxies on down. Looking at that question, I quickly realized that it was really an example of a much more general question: how does anything complicated get produced in nature? There are many everyday examples: snowflakes, turbulent fluid flows, plant and animal shapes, and many others. My first assumption was that with all the sophisticated mathematics I knew from particle physics and such, I could easily figure out what was going on in these kinds of everyday systems. When I actually tried to do it, though, it just didn't seem to work, and gradually I started to think there might be a fundamental problem with the whole approach I was using. If you look at history, the idea of using mathematics and mathematical equations to understand nature has been a defining feature of the exact sciences for perhaps 300 years. It certainly worked extremely well for Newton and friends, for finding the orbits of comets, and for many, many things since then; but somehow, when the behavior one is looking at is more complicated, it gradually seems to work less well.
What I started to think was that this was why there had never been a good theory for complicated processes in nature, in physics and particularly in biology, and I began to wonder whether there might be a way to go beyond the usual paradigm of thinking about nature in terms of mathematical equations. That was around 1981, and it happened that at that time I had just spent some time developing a large software system called SMP, which was in some ways a precursor to Mathematica. At the core of SMP was a computer language, and what I had done to design that language was to try to think of all the computations people might want to do, and then to identify primitives that could be put together to build up those computations. That had worked quite well, and in a much more evolved form I think it has now worked spectacularly well in Mathematica. But anyway, in 1981 I had the idea that, just as I had been able to find primitives for the computations people want to do, perhaps I could also find primitives for what nature does. The crucial thing I realized is that those primitives don't have to be based only on traditional mathematical constructs. I mean, if one is going to do theoretical science, one has to assume that nature follows some kind of definite rules, but why should those rules involve only the kinds of constructs that have been invented in human mathematics, things like numbers, exponentials, calculus, and so on? Couldn't the rules be more general?
Well, in the past there really wouldn't have been any systematic way to think about such things, but now we have computers, whose programs in effect implement arbitrarily general rules, and the idea I had was that perhaps the kinds of rules that can be embodied in programs might actually be what nature is using. So what kinds of programs might be relevant? From knowing about mathematical kinds of things, I assumed they would have to be at least somewhat complicated, but I still figured I should at least start by looking at really simple programs. In practical computing we're used to fairly long, complicated programs set up specifically for particular tasks, but what I wanted to know about were really simple, short programs, say ones chosen at random: what do programs like that do?
The simplest possible computer experiment is to run the simplest programs and see how they behave. Back in 1981 I settled on a particular kind of program to try, known as a cellular automaton. Here is how a simple cellular automaton works: you start with a row of cells, each black or white; then the cellular automaton evolves down the page, and the color of each cell at each step is determined by a definite rule from the color of the cell and its neighbors on the step before. In this particular case the rule is really simple: it just says that a cell will be black whenever it or its neighbors were black on the row before, and what happens is that we get a simple uniform black pattern. Well, we can use that icon at the bottom to represent the rule, the program we're using. So what happens if we change the rule a little? Well, now, instead of a uniform black pattern, we get a checkerboard.
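As an illustration of the kind of setup just described, here is a minimal sketch in Python (not code from the lecture) of a one-dimensional, two-color cellular automaton; the particular rule numbers, width, and step count are arbitrary choices for the example.

```python
def step(cells, rule):
    """Apply an elementary (3-neighbor, 2-color) cellular automaton rule once."""
    n = len(cells)
    new = []
    for i in range(n):
        # neighborhood: left, center, right (wrapping around at the edges)
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = 4 * left + 2 * center + right      # value 0..7
        new.append((rule >> index) & 1)            # read that bit of the rule number
    return new

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1                          # single black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(254)   # e.g. rule 254: a simple uniform black region
run(250)   # e.g. rule 250: a checkerboard-style pattern
run(90)    # e.g. rule 90: a nested, fractal pattern
```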
So far none of this is terribly surprising: we're using very simple rules and getting simple patterns. OK, let's try another rule. It's the same setup as before, but what's happening here? It looks as though we're not getting any simple repeating pattern. Let's run it a bit more. It gets quite intricate, but we can see it's still very regular: it's just a self-similar or fractal pattern, formed from identical nested pieces. Well, one might think that if one has a simple rule and starts with a single black cell, then one would always have to get a pattern that's somehow very regular, like this. At least in 1982 that's what I assumed was true. But one day I decided to try a very systematic experiment and just run every single one of the 256 simplest possible cellular automaton rules, and when I got to rule number 30, this is what I saw. So what's going on here? Well, let's run it a little longer. There's a bit of regularity over on the left, but otherwise this looks really complicated, actually kind of random. When I first saw this I thought it might just be a problem with our visual system: that there really were regularities, but we just couldn't see them. So I did all kinds of elaborate mathematical and statistical tests, and what I found was that, so far as I could tell, there weren't any: something like the central column of cells here really is completely random for practical purposes. Well, this is quite surprising. We have a very simple rule, and we start from a single black cell, but what we get is an incredibly complicated pattern that seems in many ways random. It just doesn't seem right: we put so little in, yet we get so much out. It's not what our ordinary intuition says should happen.
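Here is a rough sketch, in Python, of the kind of test one might run on rule 30's center column; the particular statistics shown (bit frequency and a simple block-frequency count) are illustrative choices of mine, not the specific tests described in the lecture.

```python
from collections import Counter

def rule30_center_column(steps):
    """Evolve rule 30 from a single black cell and collect the center-column bits."""
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = [
            (30 >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return column

bits = rule30_center_column(4000)
print("fraction of 1s:", sum(bits) / len(bits))          # close to 0.5 if random-looking
blocks = Counter(tuple(bits[i:i + 3]) for i in range(len(bits) - 3))
print("3-bit block counts:", dict(blocks))               # roughly uniform if random-looking
```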
I mean, in our everyday experience, and in doing engineering, what we're used to is that to make something complicated we somehow have to start with complicated plans or use complicated rules. But what we're seeing here is that, in reality, even extremely simple rules can produce incredibly complicated behavior. Well, it took me years to come to terms with this phenomenon, and in fact it has gradually overturned much of what I thought I knew about the foundations of science, and it's what has led me to spend the last 15 years building a new intellectual structure, really a new kind of science.
Well, OK, having seen rule 30, what else can happen? Here is another of the 256 simplest cellular automata. This is rule 110, and this one grows only on the left. What is it doing? Let's run it some more. It doesn't make any kind of uniform randomness; instead we can see complicated little structures running around. We can ask what is going to happen: will we get more little structures, or will they all die out? The only way to find out seems to be just to keep running the system, and after 2780 steps we finally get the answer, at least in this case.
We basically end up with just a single structure, but it's remarkable how much we get out of starting with one little black cell and then following a really simple rule. Well, through the mid-1980s I studied a lot of cellular automata and saw these basic phenomena, but I wasn't sure whether they were somehow specific to cellular automata or more general, and I had the practical problem that there was no easy way to set up all the different kinds of computer experiments one would have to do to find out. In fact, that was one of the main reasons I started building Mathematica.
I wanted once and for all to make a system that I could use to do all the calculations and all the computer experiments I wanted. For about five years the task of building Mathematica and our company pretty much consumed me, but finally, in 1991, I decided I could start looking at my science questions again. It was really amazing: things that had taken me days to do before now took minutes, and it was easy to do all the experiments I wanted. I guess it was a little like how it must have felt when telescopes or microscopes were first invented: you could just point them somewhere and almost immediately see a whole world of new phenomena, like little microscopic creatures in pond water and so on.
Well, in the computational world, you just pointed Mathematica somewhere and suddenly you could see all these amazing things. So the first question was: how special are cellular automata? Well, I looked at lots of cellular automata and often saw very complicated behavior. What about programs with different setups, though? Here is a Turing machine, for example. It has a row of cells, like a cellular automaton, but unlike a cellular automaton only one cell gets updated at each step, and that cell moves around. Well, the first few Turing machines only do simple kinds of things, but if one keeps going, still with very simple rules, suddenly one sees the same kind of complexity we saw in cellular automata.
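To make the Turing-machine setup concrete, here is a minimal Python sketch; the particular rule table below is an arbitrary example of mine for illustration, not one of the specific machines shown in the lecture.

```python
def run_turing_machine(rules, steps, tape_size=61):
    """Simulate a 1-D Turing machine: one head; each step reads the cell under the
    head, then writes a color, changes state, and moves one cell left or right."""
    tape = [0] * tape_size
    head, state = tape_size // 2, 0
    history = []
    for _ in range(steps):
        history.append((list(tape), head, state))
        color = tape[head]
        write, move, new_state = rules[(state, color)]
        tape[head] = write
        head = (head + move) % tape_size
        state = new_state
    return history

# An arbitrary 2-state, 2-color rule table: (state, color) -> (write, move, new_state)
example_rules = {
    (0, 0): (1, +1, 1),
    (0, 1): (1, -1, 1),
    (1, 0): (1, -1, 0),
    (1, 1): (0, -1, 0),
}

for tape, head, state in run_turing_machine(example_rules, 20):
    print("".join("#" if c else "." for c in tape), "head:", head, "state:", state)
```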
So what about other kinds of programs? Well, I looked at many different kinds: sequential substitution systems that rewrite strings like an iterated text editor; register machines, like minimal idealizations of machine language; symbolic systems, like generalizations of combinators or minimal idealizations of Mathematica. It's the same story with cellular automata or Turing machines in two dimensions, or in three dimensions. Every time one sees the same thing: even with very simple rules one can get extremely complicated behavior. One doesn't even need rules that specify an explicit process of evolution; constraints work too. Here, for example, is the simplest set of tiling constraints that force a non-periodic pattern, and here are constraints that give a kind of crystal that is forced to have a rule 30 pattern. Everywhere one looks, it's always the same basic thing:
incredibly simple rules can give incredibly complicated behavior. It seems to be a very robust, and very general, phenomenon. But how come something so fundamental hadn't been known for ages? Partly it's because one can only easily see it by doing lots of computer experiments, and it's only with computers, and particularly with Mathematica, that those have become easy to do. But more important is that with our ordinary intuition there just didn't seem to be any reason to try the experiments; it seemed so obvious they wouldn't show anything interesting. Now that one has made the basic discovery that simple programs, simple rules, can produce complicated behavior, one can go back and find all sorts of hints about it from long ago. I mean, more than 2,000 years ago the Greeks looked, for example, at prime numbers, and there's a pretty simple rule, a pretty simple program, for generating all the primes, yet the sequence of primes, once generated, looks remarkably irregular and complicated. There's also a fairly simple rule for generating the digits of pi, but once generated, that sequence looks to us completely random. And there are actually lots of cases like this with numbers. Here, for example, is what happens if you write out successive powers of three in base 2, a kind of minimal version of a linear congruential random number generator, and it's already incredibly complicated. Well, one can also get very complicated sequences from simple integer recurrences, and here, for example, is the simplest primitive recursive function that has complex behavior.
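Here is a quick Python sketch of the powers-of-three example just mentioned; the number of rows printed is an arbitrary choice.

```python
# Successive powers of 3 written in base 2: the bit patterns quickly look irregular.
for n in range(20):
    value = 3 ** n
    print(f"3^{n:<2} = {value:>12}  base 2: {bin(value)[2:]}")
```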
Well, what about other systems based on numbers? What about, for example, that favorite of traditional mathematical science, partial differential equations? Does their continuous character make them work differently? The ones people usually study show only rather simple behavior, but by doing an automated search in the space of possible symbolic equations I ended up finding these creatures, which are just simple nonlinear PDEs, yet even with very simple initial conditions they end up doing all kinds of complicated things. It's actually a bit hard to say precisely what they do. I mean, we use these PDEs to test our ever-improving state-of-the-art PDE-solving capabilities in Mathematica, but with continuous systems there's always a problem, because you end up having to discretize them, and without already more or less knowing the answer it's hard to tell whether what you're seeing is genuine. That's probably why, among mathematicians, numerical experiments can sometimes have a bit of a bad name. But in a discrete system like rule 30 there are no such problems: the bits are the bits, and one can tell one is seeing a real phenomenon.
Actually, rule 30 is simple enough to set up that there's really no reason the Babylonians couldn't have done it, and I sometimes wonder whether some ancient mosaic of rule 30 might one day be unearthed. But I rather think that if rule 30 had actually been known in antiquity, a lot of ideas about science and nature would have developed somewhat differently, because as it is, it has always seemed a great mystery how nature could apparently so easily manage to produce so many things that seem to us so complex. It's as if nature had some special secret that allows it to make things far more complex than we humans normally build, and it has often seemed that this must be evidence that something beyond human intelligence is involved. But once one has seen rule 30, a very different explanation suggests itself: all it takes is for things in nature to follow the rules of typical simple programs, and then it's almost inevitable that, as in the case of rule 30, their behavior can be highly complex.
The way we as humans are used to doing engineering and building things, we tend to operate under the constraint that we must foresee what the things we're building are going to do, and that means we've ended up being forced to use only a very special set of programs that always have simple, predictable behavior. But the point is that nature is presumably under no such constraint, so there's nothing stopping it from using something like rule 30, and that way inevitably producing all sorts of complexity. Well, once one starts thinking in terms of simple programs, it's remarkable how easily one can begin to understand the essential features of what's going on in all kinds of systems in nature. Let's start with a simple example: snowflakes. Crystals grow by starting from a seed and then successively adding pieces of solid, and one can try to capture that with a simple two-dimensional cellular automaton. Imagine a grid where each cell either is solid or not; then start from a seed and have a rule that says solid gets added at any cell adjacent to one that's already solid. This is what one gets: an ordinary-looking faceted crystal, here on a square grid.
Well, one can do the same kind of thing on a hexagonal grid, but for snowflakes there's obviously an important effect missing, which is that when a piece of ice solidifies out of water vapor it releases some latent heat, and that inhibits more ice from solidifying nearby. So what's the simplest way to capture that effect? One can just change the cellular automaton to say that ice gets added only if the number of neighboring cells that are already ice is exactly one. OK, so what happens then? Well, here's the answer, and these are all the stages one sees, and they look an awful lot like real snowflakes.
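A minimal sketch in Python of the growth rule just described, on a hexagonal grid represented with axial coordinates; the grid radius and number of growth steps are arbitrary choices of mine for illustration.

```python
# Hexagonal-grid growth: a cell becomes ice when exactly one of its six
# neighbors is already ice (a crude stand-in for the latent-heat inhibition).
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def grow_snowflake(steps, radius=30):
    ice = {(0, 0)}                                  # start from a single seed cell
    for _ in range(steps):
        candidates = {}
        for (q, r) in ice:
            for dq, dr in HEX_NEIGHBORS:
                cell = (q + dq, r + dr)
                if cell not in ice and abs(cell[0]) <= radius and abs(cell[1]) <= radius:
                    candidates[cell] = candidates.get(cell, 0) + 1
        # add exactly those cells that have exactly one ice neighbor
        ice |= {cell for cell, count in candidates.items() if count == 1}
    return ice

flake = grow_snowflake(12)
print(len(flake), "ice cells after 12 steps")
```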
It really looks as though we've captured the basic mechanism that makes snowflakes have the shapes they have, and we get various predictions, like that big snowflakes will have little holes in the places where the arms collided, which indeed they do. But even though the pictures we get look very much like real snowflakes, there are obviously details that are different. One thing one has to understand, though, is that that will always happen with a model, because the whole point of a model is to capture certain essential features of a system and idealize everything else away, and depending on what one is interested in, one may pick different features to capture. So the cellular automaton model is good, for example, if one is interested in the basic question of why snowflakes have complicated shapes, or what the distribution of shapes in some population of snowflakes will be, but it's not so useful if one is trying to answer a question like specifically how fast each arm will grow at a certain temperature.
I might say that there's actually a general confusion about models that often seems to come up when people first hear about cellular automata. They'll say: OK, it's very nice that cellular automata can reproduce what snowflakes do, but of course real snowflakes aren't actually made of cellular automaton cells. Well, the whole point of a model is that it's supposed to be an abstract way of reproducing what a system does; it's not supposed to be the system itself. I mean, when we have differential equations that describe how the Earth moves around the Sun, we don't imagine that inside the Earth there are little machines solving differential equations; rather, the differential equations abstractly represent the way the Earth moves, and it's exactly the same with models based on cellular automata: the cells and rules abstractly represent certain features of a system, and again, which abstract representation, which kind of model, is best depends on what one is interested in.
In the case of snowflakes there are certainly traditional differential equations one could use, but they're complicated and hard to solve, and if what one is actually interested in is the basic question of why snowflakes have complicated shapes, the cellular automaton model is a much better way to get at that. Well, let's take another example: fluid turbulence. Whenever there's an obstacle in a fast-moving fluid, the flow pattern around it looks complicated and quite random. But where does that randomness come from? Well, one can ask the same question about randomness in any system, and I think there are really three basic ways in which randomness can arise. The one traditionally talked about most is that randomness can come from external perturbations from the environment.
I mean, an example is a boat rocking on an ocean: the boat itself produces no randomness, it just moves randomly because it's exposed to all the randomness of the ocean. It's the same kind of thing with Brownian motion, where the randomness comes from lots of microscopic molecules bouncing around, or with electronic noise, where one is effectively hearing lots of random thermal motion amplified. Well, in the last 20 years or so another way of getting randomness has often been talked about, associated with chaos theory, and the idea there is not to inject randomness into a system continuously, but to feed it in only at the beginning, through the details of the initial conditions for the system. So think about flipping a coin or spinning a wheel: once it's started, there's no randomness in how it moves, but the direction it ends up pointing when it stops depends sensitively on what its initial speed was, and if it's started, say, by hand, there will always be a little randomness in that, so one won't be able to predict the outcome. Well, there are more elaborate versions of this in which one effectively takes the numbers that represent the initial conditions of a system and successively excavates higher- and higher-order digits in them, and from the perspective of ordinary continuous mathematics it's slightly complicated to explain where the randomness comes from. Looked at in terms of programs, though, it's very clear that the randomness one gets out is just the randomness one put in, in the detailed pattern of digits in the initial conditions. So again this ends up being an explanation in which, essentially, the randomness comes from outside the system one is looking at. Well, so is there any other possibility?
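Before going on, here is a rough illustration of the digit-excavation idea just mentioned (my example, not one from the talk): a Python sketch using the doubling map, where each step of the "dynamics" simply exposes the next binary digit of the initial condition.

```python
from fractions import Fraction

def doubling_map_orbit(x0, steps):
    """x -> frac(2x): each iteration shifts the binary expansion of x0 left by one
    digit, so whatever randomness appears was already present in the digits of x0."""
    x = Fraction(x0)
    symbols = []
    for _ in range(steps):
        symbols.append(1 if x >= Fraction(1, 2) else 0)   # read the leading binary digit
        x = (2 * x) % 1                                    # shift the expansion left
    return symbols

x0 = Fraction(48271, 65536)              # an arbitrary initial condition
print(doubling_map_orbit(x0, 16))
print(bin(48271)[2:].zfill(16))          # compare with the binary digits of x0
```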
Well, it turns out one need look no further than rule 30. Here one doesn't put any randomness in at the beginning, one just has a single black cell, and there's no subsequent input from the outside, but what happens is that the evolution of the system just intrinsically generates apparent randomness. So what about fluid turbulence? Where does the randomness there come from? Well, with the traditional differential-equation way of modeling fluids it's very hard to work out, but one can make a simple cellular automaton model in which it's much easier. Remember that at the lowest level a fluid just consists of a bunch of molecules bouncing around, and we actually know that the details aren't too important, because air, water, and all kinds of fluids with completely different microscopic structures still show the same continuum fluid behavior. So we know we can try to make a minimal model for the underlying molecules: just have them, for example, on a discrete grid, with discrete velocities, and so on.
Well, if you do that, you get a pretty nice, practical way of doing fluid dynamics, and you can start to address some fundamental questions, and what one seems to find is that one doesn't need randomness from the environment or randomness in the initial conditions: one can get randomness from intrinsic randomness generation, from something like rule 30. Well, if one is trying to model fluid turbulence, does it matter where the randomness comes from? Intrinsic randomness generation makes at least one immediate prediction: it says that in a sufficiently carefully controlled experiment the turbulence must be exactly repeatable. If the randomness comes from the environment or from details of the initial conditions, it will inevitably be different from one run of the experiment to the next, but if it's like rule 30, then it will be exactly the same every time you run the experiment.
Well, one can ask about randomness in all kinds of systems, for example in finance, where the most obvious feature of almost any market is that prices seem to fluctuate quite randomly. So where does that randomness come from? Some of it surely comes from the environment, but some of it is probably intrinsically generated, and knowing that matters if, for example, one wants to know whether one can predict it. In physics, one place where there has been a lot of discussion about randomness is the Second Law of Thermodynamics, the law of entropy increase. A lot is known about it, but there's still a kind of basic mystery about how the laws of physics can be reversible while in our everyday experience we see so much apparent irreversibility, and I think intrinsic randomness generation finally gives a way to explain that. It's a somewhat long story, but basically it's that systems like rule 30 can so encrypt the data associated with their initial conditions that no realistic experiment or observation can ever decode it and see how to run things backwards. Well, that's a little about physics. What about biology? Well, there's certainly a lot of complexity in biology, and people often assume it's at a higher level than in physics, and they usually figure it somehow comes from adaptation and natural selection. But it has actually never been clear why natural selection should lead to all that much complexity, and that's probably why, at least outside of science, many people have thought there must be something else going on.
The question is what. Well, I think it's really just the abstract fact that we discovered with rule 30 and so on: that among simple programs it's very easy to get complexity. I mean, of course the complete genetic program for a typical organism is fairly long and complicated; for us humans it turns out to be about the same length, for example, as the Mathematica source code. But it's become increasingly clear that many of the most obvious aspects of forms and patterns in biology are actually governed by rather small programs, and looking at the kinds of regularities one sees in biological systems, that doesn't seem too surprising. When one sees more complicated things, though, traditional intuition tends to suggest they must somehow have been difficult to get, and, say, in the picture of natural selection, that they must be the result of some long and difficult process of optimization, or of fitting into some complicated ecological niche.
Well, I actually think that's not where many of the most obvious examples of complexity in biology come from. I mean, natural selection seems to be pretty good at operating on a small number of smooth parameters, lengthening one bone, shortening another, and so on, but when there's more complexity involved it's very hard for natural selection to operate well, and instead what I think one ends up seeing is much more the result of typical genetic programs chosen, in effect, at random. So let me give you an example. Here are some mollusk shells with quite complicated pigmentation patterns. In the past one might have assumed that getting things as complicated as this must be difficult, that it must somehow be the result of a sophisticated biological optimization process. But if you look at these patterns, they look incredibly similar to patterns we get from cellular automata like rule 30.
On the actual shell, the pattern is laid down by a line of pigment-producing cells on the growing edge of the shell, and it looks as though what happens can be captured rather well by a cellular automaton rule. So why one rule and not another? Well, if you look across different species you see all sorts of different patterns, but the neat thing is that there are definite classes of patterns, and those classes correspond remarkably well to the kinds of behavior one sees in cellular automata. So it's as if the mollusks of the world are just sampling the space of possible simple programs, and we get to see the results of those programs displayed on their shells. With all the emphasis on natural selection, one gets used to the idea that there can't be much fundamental theory in biology, and that the things one sees in present-day organisms must reflect detailed accidents in the history of biological evolution.
But the mollusk shell example suggests that things might actually be different, and that it might instead be reasonable to think of different kinds of organisms as somehow uniformly sampling a whole space of possible programs. So let me give one more example: the shapes of leaves. One might think these are too diverse to explain in any uniform way, but it actually turns out that there's a simple kind of program that seems to capture almost all of them. It just involves successive repeated branchings, and the remarkable thing is that the limiting shapes one gets look like real leaves, sometimes smooth, sometimes jagged, and so on.
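Here is a rough Python sketch of the repeated-branching idea; the branching angles, length ratio, and depth below are arbitrary parameters chosen for illustration, not the specific model from the book.

```python
import cmath

def branch_tips(depth, angles=(-0.5, 0.0, 0.5), ratio=0.7):
    """Repeatedly branch a stem: at each level every tip spawns new stems rotated
    by the given angles (radians) and shortened by `ratio`.  Returns the endpoints
    of the final generation of stems as complex numbers in the plane."""
    stems = [(0 + 0j, 1j)]                 # (base position, stem vector), growing upward
    for _ in range(depth):
        new_stems = []
        for base, vec in stems:
            tip = base + vec
            for a in angles:
                new_stems.append((tip, vec * ratio * cmath.exp(1j * a)))
        stems = new_stems
    return [base + vec for base, vec in stems]

points = branch_tips(6)
print(len(points), "tip points; changing the angles or ratio changes the leaf-like outline")
```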
Well, here is a simple case where one can lay out all the possible shapes one gets, and one thing one sees is that varying a parameter can change not just quantitatively but also qualitatively what kind of leaf one gets, and therefore potentially also how it can work biologically. To be a bit more sophisticated, the features of possible leaves can be summarized in a parameter-space set, which turns out to be a kind of simpler analog of the Mandelbrot set, and from the properties of this set one can deduce all kinds of features of leaves and their likely evolution.
Well, there's a lot one could say about how ideas based on simple programs can be used to capture what goes on in biology, both at the kind of macroscopic level I've been talking about and at the molecular level. But let me go back for a few minutes to physics, and in particular to fundamental physics. Traditional mathematical approaches have obviously had great success there, but they haven't yet been able to give us a truly fundamental theory of physics, and I suspect the reason is that one really needs more primitives, not just the ones from traditional mathematics but also the more general ones one can have in programs. And now that we've seen that very simple programs can produce immensely rich and complex behavior, one can't help wondering whether perhaps all the amazing things we see in our universe couldn't just be the result of some particular simple program.
It would be pretty exciting to have a little program that is a definite and precise model of our universe, so that if you run that program long enough it reproduces every single thing that happens in our universe. But what would such a program be like? Well, one thing that's kind of inevitable is that very few familiar features of our universe will be immediately visible in the program. There just isn't room: if the program is small, there's no way for it to contain separate identifiable pieces that represent electrons, or gravity, or even space or time. In fact, I think that if the program is going to be really small, it has to have as little structure as possible already built in, and for example I think a cellular automaton already has too much structure built in. For instance, it has a whole rigid array of cells laid out in space, and it separates the notion of space from the notion of the states of cells, and I don't think one needs that. In ordinary physics, space is a kind of background on top of which matter and everything else exists, but I think that in an ultimate model one really needs only space.
I don't think one needs any other basic concepts. Well, so what might space be? We normally think of space as something that doesn't have any underlying structure, but I think it's actually a bit like what happens with fluids. Our everyday experience is that something like water is a continuous fluid, but we actually know that underneath it's made up of lots of little discrete molecules, and I think something similar is happening with space: that on a small enough scale, space is just a huge collection of discrete points. In fact, I think it's really a giant network, with a changing pattern of connections between points, in which the only thing specified is how each point, each node, is connected to others.
In the end there will probably be many ways to formulate it, but a simple one is to say that each point is connected to exactly three others, making a trivalent network. OK, so how can something like space as we know it come out of this? It's actually quite easy, and in fact one can have networks that correspond to space in any number of dimensions. Here are some examples: this one is one-dimensional, this two-dimensional, this three-dimensional. The important thing is that all of these are just trivalent networks; the only thing that's different is the pattern of connections between the nodes. I've drawn them so as to make the correspondence with ordinary space clear, but it's important to realize that there's absolutely no intrinsic information in the network itself about how it should be drawn; it's just a bunch of nodes with a certain pattern of connections. Well, given a pattern of connections, how can we tell whether it corresponds to one-dimensional, two-dimensional, three-dimensional, or whatever, space? It's actually quite easy: think about starting at a particular node, then going to all the nodes that take one connection to reach, then two, then three, and so on. What one is doing is in effect forming a kind of circle or sphere, and then one simply asks how many nodes are inside it. Going out r connections, in two dimensions that will be roughly the area of a circle, pi r squared; in three dimensions the volume of a sphere, 4/3 pi r cubed; and in general, in d dimensions, it will be something that grows like r to the power d. So, given a network, that's how one can tell how many dimensions of space one effectively has. Well, that's a little about space. What about time? In the usual mathematical formulation of physics, space and time are always very much the same kind of thing, just different variables corresponding to different dimensions, but when one looks at programs they seem much more different. In a cellular automaton, for example, one moves in space by going from one cell to another, but one moves in time by applying the cellular automaton rule. So can space and time really be that different?
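As an aside, here is a small Python sketch of the node-counting idea just described (an illustration of mine, with an ordinary square grid standing in for the network): count the nodes within r connections of a starting node and compare the growth with r to the power d.

```python
from collections import deque

def ball_sizes(neighbors, start, max_r):
    """Breadth-first search: number of nodes within r connections of `start`."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] >= max_r:
            continue
        for nbr in neighbors(node):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return [sum(1 for d in dist.values() if d <= r) for r in range(max_r + 1)]

# A 2-D grid as a stand-in network: ball sizes should grow roughly like r^2.
def grid_neighbors(node):
    x, y = node
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

sizes = ball_sizes(grid_neighbors, (0, 0), 20)
for r in (5, 10, 20):
    print(f"r={r:>2}  nodes within r: {sizes[r]:>4}  ratio to r^2: {sizes[r] / r**2:.2f}")
```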
It's all pretty complicated, but I think that at the lowest level they are different. It's definitely not like in a cellular automaton, though, because in a cellular automaton there's in effect a global clock, with every cell updating in parallel at each tick, and it's hard to imagine how such a global clock could exist in our universe. So what might actually be going on? Well, here's something that at first seems crazy: what if the universe works like a Turing machine, or what I call a mobile automaton, where at each step there's only one cell that gets updated? Then there's no problem with synchronization, because there's only one place where anything happens at a time. But how can this possibly be right?
I mean, after all, we usually have the impression that everything in the universe is going through time together. I certainly don't get the impression that what's happening is that, say, first I get updated, then you get updated, and so on. But the point is: how would I know? Because until I'm updated I can't tell whether you've been updated or not. Well, if one follows this all the way through, one realizes that all we can ultimately know about is a kind of causal network of which event influences which other event, and here's an example of how one can go from an underlying sequence of updates, in this case in a mobile automaton, to a causal network, and the important thing is that even though the updates affect only one cell at a time, the resulting causal network corresponds to something that's in a sense uniform in spacetime.
OK, so how might time work in the context of the space networks I talked about before? The obvious thing is to imagine that there's an updating rule that says that whenever a piece of the network has a particular form, it should be replaced by a piece of network with another form. Here are examples of rules like that. There's an immediate problem, though: if there are several places in the network where a particular rule could apply, where should it be applied first? In general, different updating orders will lead to different causal networks, and one will get a whole tree of possible histories for the universe, and then, to say what actually happens in our universe, one would somehow need more information, to say which branch one is on.
Well, I don't consider that very plausible, and it turns out there's actually another, rather subtle, possibility. It turns out that with appropriate kinds of underlying rules it doesn't actually matter in what order they're applied: there's what I call causal invariance, which means the causal network that comes out is always the same. For those of you who know about such things, this is related to confluence, or the Church-Rosser property, in rewriting systems; it's also related to the way Mathematica manages to find canonical forms for expressions. But anyway, there are conditions on the rules, about avoiding overlaps and so on, allowing for example rules based on graphs like these, and whenever one uses only such graphs one gets causal invariance, and that means that in a sense there's always just a single thread of time in the universe. Well, besides making there be a single thread of time, this setup has another important consequence: it turns out to immediately imply that special relativity must hold. It's a slightly complicated story, but for those of you who know about these things, let me just say that different updating orders correspond to different spacelike hypersurfaces, and that's why, subject to various complicated issues about limits and averages and so on, causal invariance implies relativistic invariance. Well, let's go on. What about general relativity, the standard theory of gravity?
Well, to get at that, one needs to start by talking about curvature in space. Here is an example of a network that corresponds to flat two-dimensional space. What happens if we change the connections and mix some heptagons or pentagons in with those hexagons? The answer is that we get space that curves, that bulges in or out. Remember that in 2D the number of nodes we reach by going out a distance r is supposed to grow like r squared. Well, in ordinary flat space it's exactly r squared, but in curved space there's a correction term, and it turns out to be proportional to the so-called Ricci scalar curvature.
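In symbols, the kind of relation just described looks roughly like the following (a sketch only; the exact numerical coefficient depends on the dimension and is not stated here):

```latex
N(r) \;\approx\; a\, r^{d}\left(1 \;-\; c\, R\, r^{2} \;+\; \dots\right)
```

where N(r) is the number of nodes within r connections, d is the effective dimension, R is the Ricci scalar curvature, and a and c are constants.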
Well, that's already interesting, because the Ricci scalar curvature is exactly something that appears in Einstein's equations, which specify the structure of spacetime in general relativity. The full story is quite complicated: one really needs to look at curvature not just in space but in spacetime, as defined by causal networks; but then it turns out that the growth rates of the volumes of spacetime cones are related to the so-called Ricci tensor, and then, with some microscopic randomness and a few other conditions, it looks as though one can derive conditions on the Ricci tensor, and guess what: they appear to be exactly Einstein's equations. There are lots of issues and caveats, but it's pretty exciting: it seems that from almost nothing one has been able to derive a major feature of our universe, namely general relativity and gravity. OK, so another important thing
in our universe is particles: electrons, photons, and so on. Well, remember that all we have in this model of the universe is space, so how can we get particles? Here is how something like that can work in a cellular automaton; this particular cellular automaton happens to be our friend rule 110. We start at the top with random initial colors for the cells, but what we see is that the system quickly organizes itself into a few persistent localized structures, and these localized structures act just like particles. So, for example, here's a collision: two particles come in, and after a certain amount of interaction a bunch of particles come out. It looks almost like an explicit version of a Feynman diagram from particle physics. Well, so how does something like this work in a network? It turns out that particles can end up being little tangles, sort of non-planar pieces, in an otherwise flat graph. Well, OK, so talking about particles brings up quantum mechanics.
Quantum mechanics is really a large collection of mathematical results and constructions, and it's certainly not easy to see how to derive all of it from the underlying theory I've been talking about; in fact, just to make things more complicated, I suspect that in the end it will be easier to derive quantum field theory than quantum mechanics, rather as it's easier to go from molecular dynamics to fluid mechanics than to rigid-body mechanics. But one can already see some things fairly easily. The way quantum mechanics is usually set up, there's a kind of fundamental randomness built in. Well, my theory is completely deterministic, but the point is that it makes its own randomness, and that randomness is actually crucial, not just for giving quantum mechanics but even for building up things like space and time.
Well, one might think that with the determinism underlying my theory there couldn't be the kinds of correlations one needs to violate Bell's inequalities and get what's observed in quantum mechanics, but it turns out that that conclusion rests on certain basic assumptions about spacetime, and with my network setup it's actually fairly easy to imagine, at least, getting the correlations one needs. It can happen if there are just a few long-range connections in the network between particles, connections that simply aren't part of our ordinary 3+1-dimensional spacetime structure. In fact, it's rather remarkable how many known features of physics one seems to be able to get fairly easily from the kinds of simple programs I've been talking about, and I have to say it makes me increasingly hopeful that it will really be possible to find a single simple program that is the ultimate program for the universe.
I mean, there are lots of technical difficulties, and lots of tools that have to be built, some of which you'll be able to see in future versions of Mathematica, but I think in the end it's going to work, and it will be pretty exciting. Well, I wanted to come back now to the original discovery that really launched everything I've been talking about: the discovery that even simple programs like rule 30 can produce immensely complex behavior. So why does that happen? What's the fundamental reason? To answer that, one needs to set up a somewhat new conceptual framework, and the basis of it is to think of all processes as computations: the initial conditions of a system are the input, and the behavior that gets generated is the output. Well, sometimes the computations are ones whose purpose we immediately recognize. Here, for example, is a cellular automaton that computes the square of any number: you give it a block of n cells at the top, and it generates a block of n-squared cells at the bottom. And here is a cellular automaton that generates the primes. But actually one can think of any cellular automaton as doing a computation; it's just not necessarily a computation whose purpose we know in advance. OK, so we have all kinds of systems, and they do all kinds of computations, but how do all those computations compare?
Well, we might have thought that every different system would always do a completely different kind of computation: if I want to do addition I buy an adding machine, if I want to do exponentiation I buy an exponentiation machine. But the remarkable idea, now about 70 years old, is that no, that's not necessary: it's possible to make a universal machine that can do any computation whatsoever if it's just fed the right input. And of course that's been a pretty important idea, because it's the idea that makes software possible, and it's really the idea that launched the whole computer revolution. Strangely enough, though, it's not
an idea that in the past has had much effect on the foundations of natural science, but one of the things that comes out of what I've done is that it actually has some very important implications there as well. So let's talk about what it means to be universal. The basic point is that with the proper input, a universal system can be programmed to act like any other system. So here is an example of a universal cellular automaton, and the idea is that by changing the initial conditions one can make this single cellular automaton act like any other cellular automaton. OK, here it's behaving like rule 254, which makes a simple uniform pattern; here it's behaving like rule 90; here it's behaving like rule 30. And remember that each of these pictures is of the same universal cellular automaton, with the same underlying rules: what's happening is that by giving different initial conditions we're effectively programming the universal cellular automaton to emulate all sorts of other cellular automata, and in fact it can emulate absolutely any other cellular automaton, with rules of any size. Now, one might have thought that a system would only be able to emulate systems that were somehow simpler than itself, but the existence of universality says that we can have a fixed system that can emulate any other system, however complicated. So in a sense, once one gets to a universal system, one has maxed out from a computational point of view: one has a system that can do essentially any computation, however sophisticated.
Well, what about all those cellular automata like rule 30, and all the systems we see in nature: how sophisticated are the computations they're doing? I spent a long time thinking about this and accumulating all sorts of evidence, and what I ended up concluding is something that at first seems pretty surprising. I call it the Principle of Computational Equivalence. It's a very general principle, and in its roughest form what it says is this: essentially whenever the behavior of a system looks complex to us, it will end up corresponding to a computation of exactly equivalent sophistication. If we see behavior that's repetitive or nested, then it's pretty obvious that it corresponds to a simple computation, but what the Principle of Computational Equivalence says is that when we don't see those kinds of regularities, we're almost always looking at a process that is, in a sense, maximally computationally sophisticated.
At first that's pretty surprising, because we might have thought that the sophistication of the computations that get done would depend on the sophistication of the rules that got put in, but the Principle of Computational Equivalence says it doesn't, and that immediately gives us a prediction: it says that even though their rules are extremely simple, systems like rule 30 should be computation universal. Well, normally we'd imagine that to achieve something as sophisticated as computation universality we'd need somehow sophisticated underlying rules, and certainly the computers we use that are universal have CPU chips with millions of gates and so on, but the Principle of Computational Equivalence says one doesn't need all of that.
It says that even cellular automata with very simple rules should be universal. Well, here's one of them. This is rule 110; I showed it before. It has a really simple rule, but as you can see it does some fairly complicated things: it has all these little structures running around that look as though they might be doing logic operations or something. But can we actually put them together to get something one can see is universal? Well, one day I expect it will be possible to automate most of the process of figuring that out. Unfortunately I didn't have that automation, so I had to hire a human assistant to do it, but after a lot of painstaking work the result is that rule 110 is in fact universal. Well, that's just as the Principle of Computational Equivalence says it should be, but it's actually rather remarkable, because it means this little rule can in effect produce behavior that's as complex as that of any system.
One doesn't need anything like a whole computer CPU to do universal computation: one just needs this little rule, and that has some very important consequences when it comes to thinking about nature, because we wouldn't expect to find whole computer CPUs just lying around in nature, but we can definitely expect to find things with rules like rule 110, and that means, for example, that lots of everyday systems in nature are likely to be universal. By the way, in the past this was the simplest Turing machine known to be universal, but one can now show that this much simpler Turing machine is in fact universal, and actually I suspect one can go even further, and that this Turing machine will turn out to be the very simplest possible one that is universal.
Well, there's a lot to say about what the Principle of Computational Equivalence is and what it means. One thing it does is to make Church's thesis definite, by saying that there really is a hard upper limit on the computations that can be done in our universe. But the place where the principle really starts to come into its own is in saying that not only is there an upper limit, but that upper limit is actually reached most of the time. With incredibly simple rules one will often get simple behavior, say repetitive or nested, but the point is that if one makes the rules even a tiny bit more complicated, the Principle of Computational Equivalence says that one immediately crosses a threshold and ends up with a system that is maximally computationally sophisticated. But actually the principle goes even further than that.
Normally, when one talks about universal computation, one imagines being able to set up whatever initial conditions one wants, but the Principle of Computational Equivalence says that isn't needed, because even when the initial conditions are simple, the computations that get done will usually still be maximally sophisticated. OK, so what does all this mean? Well, first of all it gives us a way to answer the original question of how something like rule 30 manages to show behavior that seems to us so complex. The point is that there's always, in effect, a competition between an observer and the system being observed, and if the observer is somehow computationally more sophisticated than the system, then in a sense it can decode what the system is doing, so the behavior will seem simple to it. But what the Principle of Computational Equivalence says is that in most cases the observer will be exactly computationally equivalent to the system it's observing, and that's why the behavior of the system will inevitably seem complex to it. A related consequence of the Principle of Computational Equivalence is a very important phenomenon that I call computational irreducibility.
Let's say you know the rules and the initial conditions for a system. Then you can certainly work out what the system will do by explicitly running it. But the question is whether you can somehow shortcut that process. Can you, for example, just work out a formula for what will happen in the system, without ever having to explicitly trace each step? If you can, then it means you can figure out what the system will do with much less computational effort than it takes the system itself, and that kind of computational reducibility is at the core of most traditional theoretical science.
I mean, if you want to work out where an idealized Earth will be in a million years, you don't have to trace all its million orbits; you just plug a number into a formula and get a result. But the problem is: what happens when the behavior is more complex? If a system is repetitive, or even nested, it's easy to shortcut things, but what about a case like this? There's certainly no obvious way to shortcut it, and in fact I think it's computationally irreducible: there's essentially no way to work out what the system will do by any procedure that takes less computational effort than just running the system and seeing what happens.
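A small Python illustration of the contrast (my example, not one from the talk): for a simple repetitive rule a closed-form shortcut gives the state at any step directly, while for rule 30 the sketch simply has to run every step.

```python
def ca_step(cells, rule):
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_from_single_cell(rule, steps, width=201):
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = ca_step(cells, rule)
    return cells

# Rule 250 is computationally reducible: a closed-form shortcut gives any cell directly.
def rule250_shortcut(i, t, width=201):
    offset = i - width // 2
    return 1 if abs(offset) <= t and (offset + t) % 2 == 0 else 0

t = 80
assert run_from_single_cell(250, t) == [rule250_shortcut(i, t) for i in range(201)]

# For rule 30 no such shortcut is known: to get step t, we simply run all t steps.
print(sum(run_from_single_cell(30, t)), "black cells in rule 30 at step", t)
```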
Well, in traditional theoretical science there's kind of an idealization that the observer is infinitely computationally powerful relative to the system being observed, but the point is that when there's complex behavior, the Principle of Computational Equivalence says that instead the system is just as computationally sophisticated as the observer, and that's what leads to computational irreducibility, and that, in a sense, is why traditional theoretical science hasn't been able to make more progress when one sees complexity. There are always pockets of reducibility where one can make progress, but there's always a core of computational irreducibility. Well, I think computational irreducibility is a pretty important phenomenon, one that's relevant even beyond what are normally thought of as purely scientific questions, for example the problem of free will.
I mean, it has always seemed mysterious how we manage to act in ways that seem free of obvious predictive laws if our brains in fact follow definite underlying laws. But I think at least a crucial ingredient of the answer is computational irreducibility: that even with definite underlying laws there can still be no effective way to predict what a system will do, except in effect by just running the system and seeing what happens. Well, computational irreducibility can also be viewed as what leads to the phenomenon of undecidability, originally discovered in the 1930s. Look at this cellular automaton, for example, and ask the question: starting from a given initial condition, will the pattern that's produced eventually die out, or will it just keep going forever? In this case, simply running the cellular automaton tells one that after 36 steps the pattern dies out; in this case, though, it takes 1017 steps to find that out; and in these cases, even after 10 million steps it's still not clear what's going to happen. The point is that if there's computational irreducibility, there's no way to shortcut this evolution, so there's no finite computation that can always work out what will happen after an infinite time, and that means one has to say that, in general, what will happen is formally undecidable.
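Here is a Python sketch of the kind of experiment just described: run a cellular automaton from a given initial condition and report whether the pattern has died out within a step budget; if the budget runs out, the question simply remains open. The rule and initial condition below are arbitrary stand-ins of mine, not the specific ones shown on the slide.

```python
def does_pattern_die(rule, initial, max_steps):
    """Run an elementary CA and report ('dies', step) if all cells go white,
    or ('unknown', max_steps) if the step budget runs out first."""
    cells = list(initial)
    n = len(cells)
    for step in range(1, max_steps + 1):
        cells = [
            (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]
        if not any(cells):
            return "dies", step
    return "unknown", max_steps

# Arbitrary example: rule 126 from a small block of cells, with a finite step budget.
initial = [0] * 40 + [1, 0, 1, 1, 0, 1] + [0] * 40
print(does_pattern_die(126, initial, 10_000))
```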
Well, undecidability has been known about in mathematics and in computer science for quite a while, but with the Principle of Computational Equivalence one realizes that it's also relevant to natural science. If one asks questions about infinite time or infinite size limits, the answers can be undecidable: for example, whether a body will ever escape in a three-body gravitational problem, or whether some idealized biological cell line will grow forever or eventually die out, or whether there's a way to arrange some complicated molecule into a crystal below a certain temperature. And in fact I expect that whenever there are things that seem arbitrary enough that one just has to tabulate them, as in handbooks of chemical data, it's a sign of computational irreducibility at work.
Well, there's another big place where I think computational irreducibility is very important, and that's in the foundations of mathematics. It may sound kind of obvious, but it's really a deep observation about mathematics that it's often hard to do, even though it's based on fairly simple axioms; in fact, here are the axioms for essentially all of current mathematics. But even though these axioms are simple, proofs of things like the Four Color Theorem or Fermat's Last Theorem are very long, and it turns out one can think of that as just another case of the phenomenon of computational irreducibility. So let me show you a little example of how that works.
Here is an example of a simple proof in mathematics. These are some axioms, in this case specifying equivalences in logic, and this is a proof: it starts with one expression at the top, then keeps on using the axioms, and eventually proves the theorem that the expression at the top is equivalent to the one at the bottom. Well, as a sort of minimal idealization of mathematics, one can imagine that the axioms just define transformations between strings. So with the axioms at the bottom here, these are proofs of a few theorems. So how long do the proofs need to be? Here is a picture for three axiom systems showing the network of all possible transformations, and the way this works, each possible path through each network corresponds to the proof of some theorem.
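Here is a toy illustration of that picture, my own sketch rather than anything from the book: treat the axioms as string-rewrite rules and a proof as a path of rewrites from one string to another; a breadth-first search then finds the shortest proof. The two rewrite rules below are hypothetical, chosen only so the example runs.

```python
from collections import deque

def rewrites(s, rules):
    # All strings reachable from s by applying one rule at one position.
    out = []
    for lhs, rhs in rules:
        start = 0
        while (i := s.find(lhs, start)) != -1:
            out.append(s[:i] + rhs + s[i + len(lhs):])
            start = i + 1
    return out

def shortest_proof(start, target, rules, max_len=12):
    # Breadth-first search over rewrites; returns the shortest chain of strings.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in rewrites(path[-1], rules):
            if nxt not in seen and len(nxt) <= max_len:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

rules = [("A", "AB"), ("BBB", "A")]       # hypothetical two-rule "axiom system"
print(shortest_proof("A", "AA", rules))   # e.g. ['A', 'AB', 'ABB', 'ABBB', 'AA']
```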
Well, the point is that the shortest path from one particular string to another can be really long, and that means the theorem that the strings are equivalent has only a very long proof. Well, when people thought about formalizing mathematics a century ago, they assumed that in any given axiom system it would always eventually be possible to give a proof of whether any particular statement was true or false. So it was a big shock in 1931 when Gödel's theorem showed that that was not true for Peano arithmetic, the standard formal axiom system for ordinary integer arithmetic. What Gödel actually did was to look at the kind of funky self-referential statement shown here, in effect "this statement is unprovable".
Well, just from what the statement says, it is fairly clear that it can't consistently be proved true or false. But as such it doesn't look like a statement in arithmetic, and Gödel's real achievement was essentially to show that arithmetic is universal, so that in particular it can encode his funky statement. Well, for the foundations of mathematics Gödel's theorem was a big deal, but somehow in all these years it never seemed too relevant to most of the things working mathematicians deal with, and if you needed something as funky as Gödel's statement to get undecidability, that would not be surprising. But here's the thing: the principle of computational equivalence should be general enough to apply to systems in mathematics too, and it then says that computational irreducibility and undecidability should actually not be rare at all. So where are all these undecidable statements in mathematics? It has been known for quite some time that there are integer equations, so-called Diophantine equations, about which there are undecidable statements. Here is an actual example of a Diophantine equation set up explicitly to emulate rule 110. Well, this is obviously quite complicated, and not something that might show up every day, but what about simpler Diophantine equations?
Well, here are a bunch of Diophantine equations: linear ones were cracked in antiquity, quadratic ones around 1800, and since then it seems to take about another 50 or 100 years to crack each further kind. But I'm guessing that that is not really going to go on, and that in fact many of the currently unsolved problems in number theory will turn out to be undecidable. OK, but why has so much mathematics been done successfully without running into undecidability? I think it's a bit like the case of traditional theoretical physics: it has tended to stick to places where there is computational reducibility and where its methods can make progress. But at least in recent times mathematics has prided itself on being very general, so why haven't the rule 30 and rule 110 phenomena, and all the other things I have talked about and found in simple programs, shown up?
I think part of the reason is that mathematics is not as general as advertised. I mean, to see what it could be, one can imagine just enumerating possible axiom systems, and for example this shows which theorems are true for a sequence of different axiom systems; it's like a sort of ultimately stripped-down form of mathematics, with the axiom systems going down the left, the theorems going across the top, and a black dot whenever a particular theorem is true for a particular axiom system. So is there something special about the actual axiom systems used in mathematics, perhaps something that makes undecidability less rampant?
Well, if one looks at axiom systems in textbooks, they are usually fairly complicated; here is the one for logic, for example. It has been known for about 100 years that one does not need all three of those operators in there; the single Nand operator, the Sheffer stroke, is enough. But the obvious axiom system written with that is still quite complicated. Well, from all the intuition I had built up about simple programs, I suspected that there should actually be a really simple axiom system for logic, probably with just one axiom, so I searched for it and eventually found it, and I know that this is in fact the simplest possible axiom system for logic.
Here is the proof, by the way, needless to say computer generated. Knowing this, I can say that if one just enumerates axiom systems, logic turns out to be roughly the 50,000th one reached. But what about all the others? Well, most of them are also perfectly reasonable axiom systems; they just don't happen to correspond to well-known fields of mathematics. And in fact I think mathematics, as it has developed, has in some sense been tremendously constrained: at some level it is really still just various direct generalizations of the arithmetic and geometry that were studied in ancient Babylon.
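Coming back to that single Nand axiom for a moment, here is a small sanity check one can do, my own sketch rather than anything from the book: any candidate axiom written in terms of Nand must at least hold under every truth assignment, which is easy to verify by brute force. The identity below is the single Nand axiom as I recall it being stated, so treat its exact form as an assumption; the point is just how such a candidate gets screened.

```python
from itertools import product

def nand(x, y):
    # Nand (the Sheffer stroke) on 0/1 values.
    return 1 - (x & y)

def axiom_holds(p, q, r):
    # Candidate single axiom, quoted from memory: ((p|q)|r) | (p|((p|r)|p)) == r,
    # where "|" here stands for Nand.
    lhs = nand(nand(nand(p, q), r), nand(p, nand(nand(p, r), p)))
    return lhs == r

# Prints True: the identity is a tautology, a necessary (though by itself not
# sufficient) condition for it to serve as an axiom system for logic.
print(all(axiom_holds(p, q, r) for p, q, r in product((0, 1), repeat=3)))
```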
By looking at simple programs and just doing experiments, one immediately sees a much larger world, a kind of vast generalization of what mathematics has been so far. Well, let me now turn to a somewhat different topic. I want to talk about what the principle of computational equivalence says about a sort of big question: our place in the universe. It has always been natural for us to think that we as humans are very special, but the history of science keeps on showing us ways in which we are not. For example, 400 years ago we found out that our Earth is not at a special place in the universe, and a century and a half ago we found out that there was nothing special about the origin of our species.
Well, every time we lose something in specialness, science gets more general, because it can drop another footnote that says "except in the case of humans". But at this point we still often think that we are special in our level of complexity, or in our computational ability. Well, one of the big claims of the principle of computational equivalence is that that is not correct: it says that there are lots of simple abstract systems, and systems in nature, that are exactly equivalent to us in terms of their computational sophistication. Well, one sometimes gets that impression anyway, for example when one says something like "the weather has a mind of its own", and what the principle of computational equivalence now says is that yes, fluid turbulence in the atmosphere will correspond to as sophisticated a computation as anything we do. So we are not special in that sense. Well, one of the things this has consequences for is the search for extraterrestrial intelligence. There was a kind of idea that if we saw a signal produced by a sophisticated computation, then we would have no choice but to conclude that it came from a sophisticated extraterrestrial intelligence, some kind of alien civilization. Well, the principle of computational equivalence says no: actually it is easy to do sophisticated computations, and lots of things in nature do them; it doesn't take all of our biological development and civilization to manage it. And that, by the way, is why it is so hard to distinguish random radio noise from some kind of compressed and encrypted intelligent signal.
It's an interesting thing to think about: how we interact with the ultimate limits of technology. I have little doubt that there will be a time, potentially quite soon, when it will be possible to capture all the important features of human thinking in pieces of solid-state electronics, and no doubt things will get more and more efficient until everything is on an atomic scale, so that our processes of human thinking are just implemented by individual electrons whizzing around in lumps of something. Well, of course there are also electrons whizzing around in all sorts of complicated patterns in an ordinary piece of rock, and what the principle of computational equivalence tells us is that we cannot expect the patterns of electrons that represent human thinking to be ultimately any more sophisticated than the ones that occur naturally in something like a rock; there is no kind of abstract essence of human intelligence that one can identify. But what of course remains special about us is all our details and all our history, and in a sense it is the principle of computational equivalence that shows us that that history can really add up to something, because if everything were computationally reducible then, in a sense, history could never achieve anything; we could always get to the same end point without all that effort. It is interesting what the principle of computational equivalence ends up saying: it kind of encapsulates both the great strength and the great weakness of science, because on the one hand it says that all the wonders of our universe can be captured by simple rules, yet it also says that there is ultimately no way to know the consequences of those rules except, in effect, just to watch and see how they unfold.
It's amazing to me what has come out of those little computer experiments I did in the early 1980s. It has been very exciting. You know, when I started writing my book back in 1991 I thought it would not take terribly long, but I kept on discovering more and more things; I kept on looking at different areas and kept on finding all these wonderful things. One thing that was really scary, though, was that I kept finding things that did not agree with the existing conventional wisdom that I, at least, had always believed, and actually having the confidence to see past that was one of the biggest challenges.
People who have seen the notes section of my book will know that I put a lot of effort into tracing history, and one of the main reasons was that that was how I finally convinced myself of the things I had discovered: by knowing enough about the history to see why a field went the way it did, rather than the way I now think it could have gone. Well, in the 1980s I used to write academic papers about what I was doing, and I think it's fair to say that they were well received; they certainly spawned some good-sized literature trees and so on. But in the 1990s, particularly with Mathematica, I started discovering new things very quickly, and soon I had material for many dozens, perhaps hundreds, of papers, and above all I was starting to build up quite a large intellectual structure.
It was very clear that a bunch of papers scattered across all kinds of fields was never going to communicate that structure, so I decided I just had to keep working until I was finished and could present everything in a single coherent way. It took a great deal of personal focus to do that, but even though I was CEO of a very active company, I ended up working on my science every day and night for more than ten years. I talked to experts when I needed to, particularly about history, but mostly I just kept to myself, polishing everything until I could explain it clearly, and gradually I completed all the pieces of the book I wanted to write, until finally, at the beginning of last year, the writing was done.
Well, when I was finished, one problem was that the book I had made did not fit well into any of the usual categories in the publishing industry, so I ended up deciding it was easiest to publish it through our own company. Well, one of the big questions then is always how many copies to print. We talked to various people, and needless to say the answers were all over the map: some said, "You know, this is going to be something fantastic, print lots of them"; others said, "You know, you're crazy, nobody is going to be interested." Well, in the end I decided to print 50,000 copies, and on May 14th of last year they were finished and the book was officially published.
What happened then was quite exciting: by the end of the day on May 14th, all 50,000 copies we had printed had been spoken for. Well, then things really started happening: a huge response in lots of communities and in the media, some of it very sensible, some of it pretty wild and confused, and really all the classic signs of the early stages of a paradigm shift. As a student of the history of science I have certainly read a lot about paradigm shifts, but it is pretty dramatic to see one up close, to see the actual dynamics and emotions of the whole thing that one never sees when one just learns the science and the ideas years later. Of course it helps me personally to have seen some of this before, because when Mathematica came out in 1988, I think it is fair to say that it also led to a kind of paradigm shift, and as usual there was a certain amount of turbulence at first, but gradually, almost visibly, it got resolved, and now, nearly 15 years later, it is as if it had always been that way.
Of course, there is still a lot more to come from Mathematica, particularly as some of the ideas in it about symbolic programming and such really get absorbed. But the NKS paradigm shift is much bigger, and it will certainly take many years, probably decades, to work its way through, though I think it is off to a really good start. To begin with, I mean, of course there is the usual kind of Kuhnian stuff going on, people saying "this must just be like X that I have seen before", or "no, nothing like this can possibly be right", or "this just isn't science as I know it".
Well, it took me 20 years to come to terms with what I had discovered, and so far the book has been out for less than a year, but I am really impressed by how quickly a lot of people are absorbing things and actually starting to work on them. I mean, I wrote the book very carefully, and it is good to know that there are plenty of people who have read it cover to cover, some of them several times. By the way, I might mention that if you are just dipping into the book, be sure to read not only the beginning but also the end of the main text, and don't forget the notes: the whole second half of the book is historical and technical notes about all sorts of things, and if there is a particular area you know well, that can be a good place to start.
One great thing about the technical notes is that I was able to use Mathematica as the notation, and that ended up working very well: it let me say a lot in each line, very clearly. And of course it also has the great advantage that one can not only read it but also run it, and in fact you can download all the programs in the book, together with sample results and so on, from the Wolfram Science website. By the way, in version 4.2 of Mathematica, which came out shortly after the book, we added a very efficient built-in CellularAutomaton function, partly as a way of commemorating the book. Within the book, much of the presentation is very visual, and of course all the graphics were made with Mathematica programs, and it turns out that with some clever new technology of ours we have been able to take those programs and build a separate piece of software out of them called NKS Explorer, which lets anyone reproduce the main pictures in the book and then go on and do their own experiments.
In fact, it's amazing how easily one can discover new things with it; it is a lot of fun, and there is a lot of science to be done. To wrap up here: you know, my book is really just the beginning. I mean, it has been exciting over the last six months to see how many people have gotten energized to work on the ideas in it. In many ways it has actually been quite overwhelming, and it has really made me want to work out what the best infrastructure is for making all of this thrive. We are already organizing some things; they have started to appear on the Wolfram Science website.
There is quite a bit of reference material connected to the book there, and gradually there will be more. Soon there will also be a large collection of open problems, averaging about one for every page of the book, and there will be a huge repository of information about specific simple programs, about what is out there in the computational world. You know, one might have thought that once one had finished discovering all the things in NKS, and then writing a book about them, that would be enough, and certainly with all the extra time I have from no longer being in the middle of writing NKS, I am having a great time right now working on Mathematica 5 and Mathematica 6, and really digging into all the wonderful things I can do with what Mathematica and I have created.
I'm also gradually recovering from the actual process of writing, ceasing to be a recluse, going out and giving talks and so on, and getting ready to take the next steps to let the ideas in the book grow and flourish in the world in the best possible way, and to build more tools for doing what I most love to do, which is generating new ideas and discovering new things. So, OK, what is going to happen with all the things I have talked about here today? I think it will be a long story that unfolds over many decades at least, but I think three big things will come out of it. First, a new area of basic science, like physics or chemistry or mathematics, but concerned with understanding what is out there in the computational world. Second, lots of applications to science, technology and other things: lots of new raw material for making models of all kinds of systems, including perhaps our whole universe, but also lots of new directions for technology, because now we have all these new mechanisms, not just gears and wheels but things like rule 30, which give us access to many more of the kinds of things nature can do, and which also give us new ways of approaching, for example, the creation of algorithms or nanotechnology. And third, a kind of conceptual direction: understanding more about the fundamental character of science and mathematics and about the place we have in our universe, and in a sense giving a new framework for thinking about things in general, and a new basis for a basic common thread in education, a kind of alternative to mathematics.
There are so many possibilities, so many things to discover, so much low-hanging fruit to pick. I wish I could have told you more here today; really I have only been able to scratch the surface, but I hope I have managed to communicate at least a little of what is in that big book of mine, and of what has kept me so excited all these years. Thank you very much. You see all kinds of interesting patterns in your cellular automata; do you see any patterns that correspond to triggered events, or rather activated events, as in chemistry, say, where something has to jump over a barrier, a rare event like that? Well, what you could say is that this rule 110 cellular automaton, for example, is something completely deterministic, but occasionally, if one looks at, say, what one of these structures does when it first collides with something else, that is a rare event which occurs only under very particular surrounding conditions.
I mean, another example of that: let me take another example here, where one can ask, starting from many different initial conditions, what one observes in the way of exponential kinetics, the kind associated with crossing barriers and triggered events. Well, you can set up cellular automata that do that sort of thing, because they do it for much the same reasons such things happen in optics and places like that: at an aggregate level they effectively obey the same differential equations. But that is not particularly interesting.
What is more interesting is when, for more combinatorial reasons, you can see rare events happening. I mean, let me give you an example here: this is a particular cellular automaton rule, and this shows what happens with many different initial conditions. Usually the pattern just dies out.
Looking at this, you might conclude that, well, yes, there can be. I mean, if we look at these systems that start from random initial conditions, you might say, "Okay,There are these persistent structures that arise in particular cases, but there will always be things that look more or less like this, well, it turns out that if you run that system for many, many different initial conditions, you will eventually find an initial condition where no, it doesn't look like that at all. nothing to that, instead the system actually produces this, huh. um, this is a fun unique U block type of thing as output, in other words, it's a very rare thing, the probability that you'll have one of these random initial conditions is pretty small, but if you have a long enough initial condition , at least somewhere you'll get one of these things and the system will be sort of taken over by that thing, um, it's kind of interesting because if you ask a kind of question in statistical physics or something about the limits of what the system does in big moments what you realize is that the presence of these kinds of unexpected things that appear prevents one from having an ordinary type of thermodynamic limit because for one of these types of systems, you see, what I mean is to do a connection to life, for example, you have to be able to see more stable situations in your simulation.
You know, we are all in a stable situation. I'm trying to say one more thing about this is um uh there's a question about um uh let me let me show you what happens in um uh this has to do with thermodynamics and um the question is pretty much there's a basic question which is If the laws of Ling for the universe are reversible, why are the things we see in everyday life so often irreversible? What it means for laws to be reversible is that one can go back and forth from a different given state. In the universe, however, we know from many types of things, if we pick up an object and let it know, we drop it on the ground and it breaks.
It's very difficult for us to go back and reassemble that object on an individual level. molecules bouncing around obeying let's say laws of mechanics, it is perfectly possible that their movements are reversed and that what they do goes backwards as well as forwards, that type of phenomenon is quite easy to capture and one of the types of systems that I've talked about this is an example of a cellular tomato that is uh um that is reversible in the sense that um uh it um you can uh has a rule that allows you, from any given condition here, to both uniquely figure out how to move forward and figure out. uniquely how to go back, then this is a cellular automaton that microscopically is precisely reversible.
The interesting thing about this is that if you look at a large scale, what you see is something that, in principle, is reversible, in practice it is very difficult to reverse. Let me show you an example here. This is an example of how to run a similar cellular atom. um starting from uh um that goes um starting from uh uh let's see down down. Here, um, there was a kind of simple state that occurred here and one can uniquely move back from that simple state, one can uniquely move forward from that simple state, but once one gets here, although in principle one can go back uniquely.
In practice, it is a difficult crypt analysis problem to figure out exactly how to return to that initial state. What I will say and then I will stop addressing this particular topic, that is what one can ask. for different cellular autometers, for example, what happens with respect to this type of randomization that we saw in that case? Typically, sometimes the behavior is simple enough that you don't get any randomization, it just repeats periodically. By doing the same thing, you often get what is a kind of typical second law of the behavior of thermodynamics: although you start from something simple, you quickly get apparent randomness and what De generates in a kind of motion becomes heat and you get all.
This type of irreversible randomness. Well, one thing that one can do when one starts looking at these simple programs is to be a little more explicit about how this type of randomization phenomenon works and it turns out that, interestingly enough, there are some cases. where you don't get the kind of uh uh, you don't get something that just stays fixed or becomes periodic or you don't get something that shows the standard second law of Thermodynamics kind of randomization, instead you get a fun kind of thing which appears to have essentially infinite transient length and contains many small pieces that actually act as metastable states, the usual interpretation of a sort of entropy increasing LW does not seem to apply to this particular system, it seems this system continually generates small metastable pieces that last forever and it may be that something like this is a reasonable model of some kinds of things that happen, for example, in iCal biological systems and the way they happen at least for a while.
For a long time they seem to evade the second law of thermodynamics, so that's a bit, at least if you hear a lot of examples of complex patterns generated by programs, say, different types of leaf patterns. For example, is there a way to explore the space of simple programs that would have given rise to the leaf pattern, the complex pattern? Is there any guide you can give? That's right, so the question is how do you find which one? program is what a phenomenon can occur, how can you find a simple program that reproduces that phenomenon? I'm talking about the traditional type of statistics and model fitting, etc., it's very geared towards fitting parameters, fitting functions to the behavior we're asking for.
This is about tailoring programs to behavior, unfortunately there's kind of the bad news about that, which is that it's fundamentally difficult because even given a program it's very difficult to know what it will do and, given that fact, work backwards from the phenomenon to the program. it's inevitably very difficult now, in a sense, you know what, um uh, so it's not realistic to have a systematic machine that takes the phenomena, mashes them up and finds some programs and it's interesting to see the various things that we do in perception and analysis, Whether it's for visual perception, whether it's for data compression, whether it's for crypt analysis, all these kinds of things, we can ask what those things mean, how far do those things go in taking a phenomenon and sort of finding things through? from that and the answer is that they don't go very far, they do well with repetition, whether it's transforming frequency, spectra, things like that, some things work with nesting, for example, lle ziv compression works with searching nested patterns. and things, some forms of crypt analysis do that too, but finding how to break down these other things, that's not something those methods can do and in fact, I claim it's fundamentally difficult, I mean, you can even look for example, In mathematics, you can ask the question: what kinds of things can traditional mathematics do?
So, for example, if you have some kind of periodic pattern, it's very easy to reproduce. That with traditional mathematics, I mean color. of a cell at position There is a simple procedure involving sequences of coordinate digits or, in fact, something else you can do. I mentioned uh, well, another thing you can do for these nested patterns. It turns out that at least the simple nested patterns, that very simple nested pattern at the top are binomial coefficients. mod 2, even the nested pattern at the bottom, which visually is not much more complicated for us, turns out to be given by the Gau polynomial mod 2 and even for the nested patterns it is not very clear mathematically, there is no analysis technique mathematician you provide you have the kind of generalized things that you need for that and in fact you can see other kinds of things that you could see, for example, represent these things in terms of logical functions and you have the same kind of problem, so I think there's some kind of general result that you can't get, that you can't systematically go from the phenomenon to finding a simple program that fits it as a practical matter, what can you do? to do it right, for example, one of the things that the first thing you can do is have some intuition about what simple programs can do so that you can get a sense of whether searching a space of simple programs exhaustively searches among a a billion programs or something could actually find one.
That's relevant, the second thing is that, as a practical matter, what I've been interested in doing is making a kind of giant atlas of what simple programs really do, something like what there is in the computational world so that one can Choose that, collect the things from that Atlas to know if the phenomenon one is looking at is really something relevant. It's kind of an analogue of something like an organic chemistry database where, instead of chemical compounds, you have different simple programs and you ask what all the properties of all these simple programs are and then you wait for that or you wait for that when one has a particular phenomenon that one wants or a particular engineering problem that one is trying to solve. one can go to this Atlas and potentially find something that is relevant
