Wolfram Physics Project: Relations to Category Theory

Mar 18, 2024
...the Curry-Howard business, because, you know, we have it, for example, in our language we have something where we can do an actual proof. So we have, I don't know, well, let's do this, let's do a very simple case. What's a good example? Let's take a random example; actually, you could have an example here. Look, then, let's do it, let's do it as group theory. Well, these would be the axioms of group theory. Well, then I have... so I don't know how we think about this categorically, but I could totally say, you know, I could, let's say, maybe you can explain to me: this would now be a representation of a proof in... so this is a representation of group theory, this is a representation of a proof in group theory. So okay, is there a way to understand it?
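For concreteness, a minimal Wolfram Language sketch of the kind of thing being set up here, assuming the built-in AxiomaticTheory and FindEquationalProof; the symbol thm is only a placeholder for whichever equational statement gets proved:

    (* the group axioms as an equational theory *)
    axioms = AxiomaticTheory["GroupAxioms"];

    (* hypothetical usage: thm stands in for the statement being proved *)
    proof = FindEquationalProof[thm, axioms];
    proof["ProofGraph"]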
So one thing I can think of is each of these pieces here... well, maybe Jonathan, who wrote this code... Well, yeah, I mean, the Curry-Howard isomorphism in the context of FindEquationalProof just states that each proof has an associated proof function and each proof function can be interpreted as a proof. So if you call FindEquationalProof, instead of requesting the proof graph you can request the proof function: you get a symbolic piece of Wolfram Language code that we can execute just fine. So let's take a look at this. I'm trying to figure this out. Okay, so there's a proof function, so you can then run it.
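A minimal sketch of what that looks like, assuming a proof object from FindEquationalProof as in the sketch above:

    pf = proof["ProofFunction"];   (* a symbolic piece of Wolfram Language code *)
    pf[]                           (* applying it to nothing replays the proof; it returns True *)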
What happens if I run it? I'm just saying it should be True. I just apply it to nothing. Yeah, that wasn't a very interesting function. It's quite an exciting function, because it verifies that that proof was actually correct. Yes, but my point is, my point is that the direct "code as proof" interpretation, the more surprising direction, is that each symbolic piece of Wolfram Language code can have an interpretation as a proof, as a proof object. Is it the code itself, or is it the execution of the code in some way? It's the code itself. So you're saying... I could imagine something, I mean this is where I get very confused, so let's say I write something like this: what is its associated proof object?
I mean, that thing just sits there, like that. If you want, you can treat that as a proof that that particular thing evaluates: the computation terminates. That's a trivial interpretation: the evaluation of f[g[x, y], a, z] is... that evaluation ends, and the expression you just generated is the proof that that's true. So if I write something like this, let's say I write 2 + 2, or 3 + 4, okay, let me do something less trivial, um, oh, I don't know, 1 + 6*7. Okay, then the Trace will show me the sequence of operations that took place to arrive at the result.
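In Wolfram Language terms, the sort of thing being typed in here is just the following; Trace is the built-in, and the particular arithmetic is only an illustration:

    2 + 2            (* evaluates to 4; the finished evaluation is the trivial "proof" *)
    Trace[1 + 6*7]   (* shows the sequence of rewrites: {{6 7, 42}, 1 + 42, 43} *)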
Is it relevant to anything? Should I think about that? What do you mean by... Yes, so those would be, you know, you could interpret those as intermediate steps. You know that each of them can be treated as, again in the language of FindEquationalProof, each of them could be treated as the statement of a substitution lemma, with the particular symbolic transformation from one to the other being the application of a rewrite operation, and then that sequence of substitution lemmas constitutes the proof that that particular evaluation terminated. Okay, so what is the Curry-Howard statement about all this? As I said, Fabrizio probably has a more general interpretation, I mean there are all sorts of formulations in terms of dependent type theory and so on, but the direct FindEquationalProof formulation is exactly that every proof object has an associated proof function and that each code fragment can be interpreted as a proof object, so the proof is the... then each code fragment will have an input and an output. I say two plus six and wow!
I do it as if it were an input cell. You could say something like two plus two, and it will come out as four. So are you saying that when you say code snippet you mean that the code "two plus two" has, as its type, the proposition "two plus two is four"? Is that right? Right, right. So you're saying, in the language of a kind of category theory, something like: two plus two transforms into four in some way, is that so? Yes. There's a reason why it's difficult to understand these things: because you're thinking in terms of untyped expressions, without thinking in types. So yes, thinking in terms of a typed programming language would be much clearer. Yes, okay. So, Tali, can you give an example? I mean, I'm not claiming the particulars, and I think there are different ways of looking at it, but my understanding is that if you can get it past a compiler and run the program, then the fact that you can do that is like a witness that the types make sense, and that corresponds in some way to a proof that the types are more or less correct.
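To make the typed reading concrete, here is a hedged gloss in dependent-type notation; this is not anything built into the Wolfram Language, just the standard way the "2 + 2 : 2 + 2 = 4" remark is usually written:

    2 + 2 \;\longrightarrow\; 4
    \qquad\text{so the canonical witness is}\qquad
    \mathsf{refl} \,:\, 2 + 2 = 4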
I'm not an expert, but that was my intuition. So if you're proving some arbitrary theorem, the claim is that the proof of that theorem is equivalent to the claim that the execution of a program produces a result of a particular type. In the case of a proof function here, that's why I gave the example of a proof function, you can think of it like this: the possible results of a proof function have two types, let's say True and False, if you want to avoid getting too formal. Okay, then basically what you have is a statement that says "type of hypothesis implies type of thesis", and the idea is that this implication tells you that as soon as you can produce something of type hypothesis, then you can produce something of type thesis, and Curry-Howard is saying that this corresponds to a function, a proper functional program, that goes from hypothesis to thesis, and the propositions correspond to data structures. So yes, but this is a very abstract way of saying it. Maybe we could say something like: you can embed a kind of proof in a program in a particular way, so that, for example, if you can run the program you will have verified that the proof is correct. Well, I mean the correspondence, yes: if you have a program that proves something and it compiles, then you can say that's the proof. What we need to understand is that when you talk about proving in this context it's different from the standard way of proving things; maybe we learned to prove things in the classical way, but the Curry-Howard isomorphism really works when you reason constructively, so proving something means constructing a witness to the truth of that statement. Exactly, you're proving that a particular type is inhabited. But in this particular case, yes, if you interpret True and False as elementary types, then the output, the result of this proof function, demonstrates exactly that the particular type, in this case True, is inhabited. Yes, and you can add even more complex kinds of types, yeah, sure, but we're held back by the fact that we don't have a natural type structure in the Wolfram Language, something like that. But the trace functions we were just looking at, that's something really very close to this Curry-Howard way of thinking, because there is also something that is really integral to this interpretation: the concept of term normalization. So the calculus is actually term normalization, which means that from a logical point of view you have terms, which are like syntactic expressions, and you want to prove that two things are equal, so your theorem statement is "long expression equals True", for example, and the way you do that is to build the proof tree with some rewrite rules, and those steps are actually what we call term normalization. Right, which is, it's worth saying, exactly how FindEquationalProof works internally: what it's doing is taking some arbitrary abstract rewriting system, applying a Knuth-Bendix completion procedure to it, and then applying those completed rules to the left and right sides of the input expression until both sides have been reduced to normal form, and then it does the trivial kind of equivalence test between the normal forms. So, useful.
What would be useful to me, and maybe Stephen is in the same boat here, is a really simple example that would illustrate this correspondence in action, like: here's this function of two arguments, a list of integers and an integer, going to this other thing, and the fact that you can compile and run this and find an inhabitant of that type, isn't that an exercise in proving or verifying a proof about something else? As an example, that would be really useful, because in a way the abstraction isn't...
It's not interesting to me unless you can ground it in an intuitive example. Why is the proof function example not a reasonable example in that case? Well, let's have a simpler version of that. Can we have a really simple version of that? Sure: you can do FindEquationalProof of a == c given a == b and b == c. So, finding a proof, in this case, of a == c, having given the list a == b, b == c; that's the simplest example I can think of. And then what I say is: what I want for that is the proof function for that. Yeah, what the hell is all that?
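A minimal sketch of that call; the exact output forms may differ, but the shape is this:

    proof = FindEquationalProof[a == c, {a == b, b == c}];
    pf = proof["ProofFunction"]   (* an argument-free function that replays the substitutions *)
    pf[]                          (* True *)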
So ignore the variables, those are a bit ugly. We're starting with axioms 1 and 2, so we have a == b and b == c, and we have the hypothesis a == c, and then we're making a map that replaces b going to a. Can you slow down a little bit? Sorry, okay, sorry. So we have two axioms, a == b and b == c, which are represented there as the first two lines, correct, and we have the hypothesis that we're trying to prove, which is a == c, and then... This is a function with no arguments, right? Before we get to that, yeah, yeah, exactly, all proof functions are completely argument-free. Okay. And then, I mean, you could treat the hypothesis as an argument if you wanted, but that's not how we represent it.
Can you guide us? How are we supposed to interpret this function, where are you leading us in understanding this particular function? Okay, if you take the proof object here and request its data, if you request the proof dataset, okay, there's a Dataset representation of that proof. So here you can see that we're starting from those two axioms, a == b and b == c, trying to prove this hypothesis that a == c. There's only one non-trivial step here, and I say non-trivial in a generalized sense: to deduce a == c, you take input axiom 2 and in position 1 apply axiom 1. And then if you expand that little proof further, you can see that axiom 1 in orientation 1, which means from left to right, becomes the replacement operation b goes to a, because axiom 1 read in that orientation is b == a, and then you apply that transformation and you get the output expression a == c. The point of the proof function is that you don't have to trust FindEquationalProof if you don't want to: you can represent the proof purely in terms of replacement operations, essentially pattern matching, and then just run it and see that you actually get a tautology. So let me frame this in a certain way: what this is trying to prove, it's trying to say that you can deduce a == c just using these axioms, that there is a purely substitutive way of obtaining a == c.
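And the dataset view being described above, again assuming the proof object from the earlier sketch; the property names follow the ProofObject documentation:

    proof["ProofDataset"]   (* a Dataset listing axioms, hypothesis and the substitution steps *)
    proof["ProofGraph"]     (* the same proof drawn as a graph, if you prefer the picture *)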
By simply substituting these axioms, you don't have to do any... you don't have to know Fermat's last theorem, you don't need to know any of that; all you have to know is structurally how we put one thing next to another. Yes, mainly yes, but in the context of Curry-Howard you can also do things like taking pairs and other very elementary constructions; still, yes, you're basically being constructive. You mean, good, so in my view what you're doing is literally instantiating the pattern variables: you're taking something that would be a pattern variable, you know, a variable bound in some lambda or something like that, or a pattern variable in our world, and you're saying the pattern variable is now equal to such and such a particular thing. That's an operation that you're doing, and you're doing it purely, so it's a kind of substitution operation. Now, what other operations can you do?
Basically, you're allowed to do all the operations of the lambda calculus, that is, take pairs of two objects, so if you have x and y you can consider a pairing of x and y. By pairing you mean lambda? By pairing I mean parenthesis, comma, parenthesis, yeah. How do you construct that? I mean the data structure, as a tuple? Yes, make tuples as a data structure. Yes, but how do you do that with lambdas? Are you doing it with lambdas? So if I use the typed lambda calculus, or again an untyped lambda calculus, it would basically be... yes, what is it?
What exactly? So, in an untyped lambda calculus, how would you do it? This is a question for Matteo, if no one else. How are tuples made in lambda calculus? I don't think you can... There are actually many ways to make tuples in lambda calculus. One of them, so the basic idea in untyped lambda calculus, is to simply use a function that takes an input that basically chooses between two different things. It's basically the idea of a conditional: you have something like a conditional, and then the input you feed in chooses between one of those two things, so you can think of that conditional expression as a tuple. But it's not purely structural, it actually has to compute something, right? Yeah, in lambda calculus everything is a function, so the tuple is a function. Yes.
So in the untyped lambda calculus there is only one kind of thing, so obviously if you make tuples, the tuples will have to be of that kind, so yes, you can create them this way. But the reason I say there's no real way of doing it is because it doesn't even make much sense to consider them there. I think of everything as having a type, and then the idea is that if I have something of type A and something of type B, I want to be able to consider something, their pairing, as having type A times B. But is that a specific type constructor for making tuples like that? Yeah. How do you have that, I mean, is it for each of the base types that you need this, or for each possible pair of types? Yes, for each pair of types. Are you in a typed... do you know the typed construction, the typed lambda calculus?
It has this idea that a particular lambda of one type can return a lambda of some other type, and it has this idea of making tuples like that. Yes, that's right. Are there other ideas? I mean, yeah, you also need two basic types: the unit type, which is the type with a single term, basically it's "true", and I think you should also have the false type, which is the type that is not inhabited, it has nothing in it. When you say the true type is inhabited, you're saying that something of that type exists, is that right? Is it just a type that says "this is an inhabited type, since something of this type exists", is that it? Yes, that type has exactly one term, which basically means truth; there's no other way to put it, it's a type that has only one thing in it. Okay. Sorry, just on your previous question about the Cartesian product in lambda calculus:
isn't it just something like lambda a dot lambda b dot a applied to b? Can you translate that into Wolfram Language? I don't know, what is that "dot", what would that function be? Just the dot in the lambda calculus, I mean. You mean, what is... I just want to say that the dot is the syntactic form that separates the bound variables from the body, right? Exactly. But then what is that in our case? Are you saying that the thing would be a function of...? The point is that the pair doesn't really make sense if you don't provide a way to extract the two components of your pair, so it usually doesn't matter how you find a way to encode a pair, as long as it behaves properly, and then you define the projections, and then to access the first or the second term of the pair you use the projections. Yes, you simply make something like "pair" equal to lambda a dot lambda b dot whatever you want. You mean you have an explicit function, do they work like that? I may be wrong, I'm pretty sure this is eluding me, but I'm pretty sure it's the Cartesian product. Just to get some flow here.
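For what it's worth, the Church-style encoding being described can be written directly with Wolfram Language pure functions; this is only a sketch, and a, b here are just symbolic placeholders:

    pair   = Function[{x, y}, Function[s, s[x, y]]];  (* lambda x. lambda y. lambda s. s x y *)
    first  = Function[p, p[Function[{x, y}, x]]];     (* projection onto the first component *)
    second = Function[p, p[Function[{x, y}, y]]];     (* projection onto the second component *)
    {first[pair[a, b]], second[pair[a, b]]}           (* evaluates to {a, b} *)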
I mean, I think one of the things... I mean, we thought we'd figured out a little bit about how our physics models may be related to category theory, but I think we're still here trying to get a sense of how you guys think about category theory, how one should think about category theory, um, and something I think is still missing. Go ahead, I can introduce it. So, this is from someone who is not really a practicing category theorist; I mean, I'm interested in category theory and I've learned a little bit about it, but my sense of the value of category theory is not so much in producing any particular computational result, or in telling you something deeper and more interesting about some very detailed area of mathematics that you already knew well. It's more a kind of environment for noticing and understanding that many of the kinds of processes that occur in one area of mathematics are the same kinds of processes that occur elsewhere in mathematics, if you choose the right abstractions. In other words, metaphorically the same things happen everywhere, in different areas of mathematics, but when expressed in the correct language they are identical, and it's not only that they resemble each other, but that there is a very precise sense in which they are the same, and category theory highlights that latent structure that is there everywhere. So in a certain sense it is an abstraction, a kind of effort to formalize abstract procedures that are used in different places and bring them into a common language, to understand them and think about them and point them out in all the different areas where they occur. And there are many ways you could use it: one is, you can talk in a way that people know what you mean, you can talk about some abstract procedure, say a colimit or something, and people will know what it is regardless of the particular instance of it in an area of mathematics; that's one thing you can do. Another thing you can do is be inspired to try new things in a particular area, because they are natural things to try once you've been educated in all the categorical tricks that people use in different areas. So it's a kind of tool for thinking, a framework for looking at mathematics that lets you unify certain parts of it, to think precisely about algebraic structures and other structures, and in that sense it may not quite have content of its own; really everything it does is about abstraction, and that by itself can be valuable. Okay, so that's what I understand.
I mean, to give an analogy: the original invention of logic could be thought of the same way. People had all these different arguments about, you know, this follows from that and so on, and then Aristotle and his friends essentially took the form of all these different specific kinds of arguments, about going to war or eating different foods or whatever, and said that all of these kinds of arguments fit into this pattern that corresponds, say, to syllogistic logic or something like that. And what you're saying is that there's something analogous that you're doing for argument forms, or constructions in mathematics, that have to do with, say, products, like what a product is.
I mean, it's a very general thing, and it turns out that all the kinds of products you would find in different places, products of sets or, you know, of functions, of vector spaces, whatever, really, if you look at them in a categorical sense, they are all the same, always the same, yes, a very particular structure. Okay, I can give you a very simple example of how this works. You can look at the products of sets, of topological spaces, of groups, of whatever you want, and in each case you need to define a way to do it. But if you want to abstract, you know, what is the real idea behind a product? The essence of a product is that if you give me two objects of any kind, the product must first have a pair of projection functions, so I should be able to go from the product to the two components somehow, and the second part of the essence is that if I give a function to the first component and a function to the second component, then there should be exactly one way to take these two things and go to the product, which is what we usually do in set theory when we define these things componentwise, so we say that f product g is defined componentwise. The essence, basically, is that I can take these things and package them all together in a diagram, which I'll write out in a second. And by the way, that's the universal phenomenon that a lot of this kind of mental work amounts to: just looking at a diagram and saying this diagram commutes, you see this, yeah. So the idea is that if you give me A and B, the product must be A times B, such that it has these two projections and, more importantly still, every time you give me an f and a g from some C, there is exactly one unique way to lift them to the product, and we do it through components in sets or whatever, but this is categorically packaged in the notion of limit. And at that point I have a unique tool and I can say, oh, actually this very strange construction in this super strange category is exactly the right notion of product in this context, because it satisfies this diagram. So in this sense now I can link something very complicated in some category with something very simple, like a product of sets, by saying that they are exactly the same notion in these two different contexts.
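Written out, the universal property being drawn on screen is the standard one; nothing here is specific to this session:

    \pi_1 : A \times B \to A, \qquad \pi_2 : A \times B \to B,
    \qquad
    \forall\, f : C \to A,\; g : C \to B \;\;
    \exists!\; \langle f, g\rangle : C \to A \times B
    \;\;\text{with}\;\;
    \pi_1 \circ \langle f, g\rangle = f, \quad \pi_2 \circ \langle f, g\rangle = g .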
Okay, so let's take an example here. That would be fine; those are two different approaches that I can imagine in our world of thinking about physics, etc. So, I'm sorry, but just before you do, can I express a slight disagreement with the idea that category theory is something purely bureaucratic from the point of view of our project? There are at least five areas where I think it might actually have important, non-trivial things to say about the nature of those areas. Um, okay, I mean, is the non-bureaucratic aspect of category theory the mechanics of these diagrams and so on, what you do with these diagrams, right?
I mean, what is the non-bureaucratic aspect, as opposed to the bureaucratic aspect, of category theory? I can define things in a category, I can abstract them, and then I can transfer them to other categories. I think that's the most important thing, because imagine I have a category that contains products and the concrete definition of products there is just horrible, something horrendous, you would never think of it in your life. I would never go looking for it and find out that maybe it's valuable in some way if I didn't first have this abstract definition of product. So, for example, Jonathan, I mean, you know the fact that we're looking at things like black holes in branchial space. Yes, you know, it starts by looking at black holes in physical space, but we have an essentially analogous mathematical structure for branchial space to the one we have for physical space, so the obvious
question is: what is an event horizon in branchial space? That's it, I mean, there it is. You have a very simple and straightforward example of an obvious lifting of the concept of a spacetime causal graph to the concept of a multiway causal graph. But, for example, one very general thing I was interested in asking Fabrizio about is this: you know that one of the general problems we have in the study of these formalisms is the notion of going between the representations and the things they represent, so to speak. So if you take a spacetime causal graph, you know our idea is that it represents something; it is a representation, a skeletonized version of the conformal structure of some other entity, some manifold,
in the idealized case. Now, category theory has developed all this Tannakian machinery, this Tannaka duality formalism. The generalized way of stating it is to use the enriched Yoneda lemma to go from the representations of an object to the object and vice versa, and, you know, a question that I think is obviously worth asking is whether there's a possibility that we could make use of a sort of Tannakian formalism to translate between things like the spacetime causal graph and Lorentzian manifolds, and if so, as you rightly say, we could get an immediate idea of what the continuum limits of something like the multiway causal graph would be. So, the differential-topology techniques that Penrose first developed to study Lorentzian manifolds: a combinatorial representation of that structure is exactly what the causal network is. Penrose's differential topology methods give you the notions of causal precedence, chronological precedence and horismos for any events in spacetime, and it turns out, you know, each of those is just a partial order; you can build the diagram for that and you get a network, and then in differential topology there is a technique that allows you to go from that combinatorial structure to the real Lorentzian manifold and back again. So the obvious question is: can we represent that in something like a Tannakian duality?
If so, can we generalize it upward? The correct way to do it categorically, I think, would be that, as you said, Penrose basically gives you a poset for each manifold; basically I think you get a functor that goes from the category of differentiable spacetime manifolds, however you represent these things, to the category of posets. What you want is a category that actually has more structure than the category of posets, because you also want to include combinatorial information of some kind; you want to make sure that everything that matters to you from your computational point of view is represented in this category in the correct way. And then what you want to demonstrate is that you have an adjunction between the category in which you represent spacetimes and the category in which these manifolds live with these combinatorial structures, and if you have an adjunction, that basically means that, yes, you can go back and forth any way you want. Basically, yes. So an interesting point was the worry about the adjunction earlier; I'm wondering about some of the things you've done with Petri nets, right, yeah, the fact that you get an adjunction between, say, a Petri net and a, you know, symmetric monoidal category, but one that actually hurts you in some ways, right, that rules out some interesting categories from the Petri net representation. Well, I mean, the problem there is a little more delicate, and it's mainly that in a Petri net the transitions don't actually distinguish the order of the inputs, you know, because transitions are defined with inputs and outputs as multisets. So the idea is that if a transition takes a token from A and a token from B, or takes one from B and one from A, it's exactly the same thing; it's a graph, you can permute the places and nothing happens. Whereas in a symmetric monoidal category your morphisms in fact distinguish between possible orders of tokens, so you have to say that the morphism removes the token from A and then the token from B, and that is different from removing them in the other order. What I'm trying to say is that Petri nets are defined using multisets, which are unordered lists, while symmetric monoidal categories are defined using lists, strings of things, that are ordered, and so a lot of people have tried to demonstrate a correspondence between these two things, because it really seems like the Petri net presents the symmetric monoidal category in some way, but because you have this difference, that in one case things are not ordered and in the other they are,
you basically need to do a lot of weird things to make it work, and by make it work I mean find an adjunction. And the reason, yes, this really hurts you when you look at the implementation in practice and so on, is that you're trying to get an adjunction between two things that seem very similar but are actually not similar enough to be adjoint. Does the adjunction do something good? Well, the way we solve this problem: I'm in the process of writing a paper with John Baez, Jade Master and Mike Shulman on this, where we find a more relaxed version of Petri nets, a more general version of Petri nets, that has an adjunction with symmetric monoidal categories that works well and that also has links with all the other kinds of Petri nets that have been considered in the literature.
So basically we found this missing piece that now makes everything consistent and work the right way. This paper, by the way, I think we'll post in a few months. But, great. Do the categories have to be strict? Yes, normally in the literature, again, all of these are always strict, and the reason they're always strict is because you don't really want... I mean the associativity, you know? There is no order: if I say transitions U and V fire at the same time, it doesn't make any sense to say that U and V fire and then W fires, or vice versa, so in this sense you want the Petri net to represent the concurrency.
I think that's probably the only undisputed point at the moment in all the Petri net literature. Sorry, Jonathan. Yes, I was going to say that this is one of the things that really worries me, because I'm very enthusiastic about using some kind of Tannakian formalism to go from our representations to the actual geometric structures, the metric structures, that they are supposed to represent. One of the things that worries me is, for example, at a trivial level, you know, our hypergraph models are unordered lists of ordered lists, which suffer from exactly
the problem raised by the old Petri net formalism. So I'm a little worried that if we define, say, an adjunction between the hypergraphs and some Riemannian manifolds, in exactly the same way as in the Petri net case, whether actually we're going to lose some information by doing that, and whether, you know, what we think is the object being represented as a Riemannian manifold is actually not close enough to the real object. And so I'm very curious to know: if you run into such a problem, what is the general way to solve it? The general way to solve it... I don't know if it's general enough, but basically a standard trick you can use is to encode the lack of ordering as an action of some kind, so you're basically saying: not only do I have a Riemannian manifold, but I have a Riemannian manifold with some action on top of it that somehow permutes things, etc.
The idea is that when you link your hypergraph to this thing, the action makes sure that you can classify and keep track of all the possible orders without having to do it in other ways. Which you can obviously relax, or make more demanding, by saying that my hypergraphs are actually hypergraphs with something extra, or my manifolds are not exactly Riemannian but are Riemannian plus something else; these are really adjustments that depend on your problem. But in general you can look, for example, at what people have done with nominal sets, which I actually think is a similar problem in some ways. So, you know, in lambda calculus you have alpha equivalence: you can always say, given a fresh variable, I can rename a variable to this other thing and nothing changes. The problem is that this is a lot harder to deal with when you actually do formal verification and such, and basically people started thinking about a machinery, a theory of types, that automatically keeps track of and transports all of these possible renaming substitutions that you can do, and so on.
Instead of basically saying, oh yeah, there's alpha equivalence but you never explicitly model it in your formalism, you actually do: you actually say, okay, I'm now considering not just sets of variables but sets with permutation actions on top of them, so that each variable renaming just amounts to, you know, pressing a button, applying this action to the set, and this automatically reverberates through my whole theory and everything is taken care of consistently. So that could probably be a solution. I don't know your problem in detail and I'm certainly not an expert in differential topology or differential geometry and such, but yeah, I would say that group actions, and actions in general, are probably a good way to deal with the permutation of things you have. So, follow me, maybe we could, just for context, talk for a minute about Petri nets, okay? Because we're trying to understand the correspondence between these different kinds of things, because, I mean, for example, Petri nets are often considered a model of concurrency, but obviously we have a concurrency model, you know, in our multiway graphs and the things that we're also dealing with, and I'd like to understand the correspondence between these. So, by context, I mean:
Petri nets were originally invented, as I understand them, as a kind of mathematical idealization of things like chemical reactions. Yeah, so you can have several different things that come in, and you know that this type of molecule interacts with this type of molecule to produce this type of molecule, all those interactions can occur asynchronously, and you're asking the question of how the counts move around, you know, how many of molecule type A and molecule type B become a type C molecule and so on. I mean, it's like that, so I guess I've always had a hard time understanding, myself, maybe you guys now have a better way of thinking about what a Petri net really is, but I've always found it to be something that has too many pieces to its definition to sound convincing, so to speak; it has too many different pieces of structure.
I mean, you know, it's not like a finite state machine, where there's a very easy description of what a finite state machine is. A Petri net is like: well, you have these transitions, you have these states, you have these tokens, and other complications. I understand your point. I spoke to a researcher in Japan a couple of years ago, I can't remember her name, she was doing research in automata theory, and she told me basically what you're telling me now: she really didn't like Petri nets, because of what I think she called the lack of topological uniformity. You have places and transitions, which are basically different beasts that, you know, coexist in some way, whereas in a finite state machine basically everything is topologically the same, so you can actually think of transformations of state machines as transformations of graphs, and with Petri nets it's obviously more complicated.
I think the main thing that makes Petri nets interesting is that you can see them as a calculus of resources. A finite state machine is a bit different: it tells you what state you are in and what you can do in that state, while in a Petri net you actually know how many of each type of resource you have. For example, on that page, I think it's the Wikipedia one that's up there, yes, you can interpret this diagram by saying, yes, you have a resource of this type and two or three resources of that type; these places, say P1, P3 and so on, can denote molecules, so you can say I have a water molecule, two hydrogens and whatever, and then the transitions tell you how all these things combine. So, can I ask a naive question? This is a string rewrite system, okay, and I mean this is a trivial string rewrite system where I'm just replacing: BA goes to B, okay. What occurs to me is, if I were to take these string elements, if I were to let these strings be commutative, then these strings are not ordered, they only have counts of A and B.
Do I really have a Petri net here? In other words, what we've defined is something that says, you know, in this particular case, that BA going to B holds, so I'm trying to understand to what extent these states... There's a question: are these states like the places in the Petri net, and are these events like the transitions in the Petri net? I mean, in what sense do they differ? So this is a string rewriting system. Let's ask the very basic question: how is a string rewriting system different from a Petri net system?
Well, from a formal-language point of view, I think you can identify a family of languages that are recognized by a string rewriting system. So yes, but that's not the most useful comparison. I mean, any language: there's a hierarchy of different kinds of string rewrite systems, and if you allow, I mean, you know, if you allow arbitrary string rewrites you can recognize any language. Yes, but that's an acceptor; that's not what a Petri net does. A Petri net isn't an acceptor either, a Petri net is a transformer, not an acceptor. I mean, language recognition usually says there is a set of possible inputs, and you ask which ones are accepted: some are accepted and some are not. But you can also characterize Petri nets in terms of the languages they recognize, and what you get is something more powerful than the languages recognized by finite state machines and less powerful than a Turing machine. So if you can parse every language with a string system, then... But we're not thinking of this as a recognizer, we're thinking of it as a generator, much more like a Petri net: we're saying you feed in three B's and two A's, fine, and in this particular case, and Jonathan or someone, maybe we can figure out how to do something where it only matters what the number of A's and B's is. There's probably an easy way to do that, right? I mean, without having global events, no, I don't think there's a way to do it, right; short of sorting, you can't know how many A's and B's there are in the whole string unless your event takes in the entire string. But there's certainly a way to think about it.
I mean, if it's not a string system, it's another thing; you can do this with counters, basically. Why couldn't we have a multiway system that does exactly what this Petri net does? We absolutely could. So right now our criterion for an event application is pattern matching, but if we also had a criterion that was effectively pattern matching plus, you know, a minimum number of tokens accumulated in a particular region, then, you know, that would be pretty easy to set up. Why am I asking about this? It's just that I'm trying to understand, because there's a big difference between the systems that we're dealing with and the systems that, you know, a lot of people have talked about. In our systems, you know, we say there are string rewrites, etc., but we say this happens 10 to the 100 times, whereas, you know, someone drawing a Petri net is representing something that is happening in particular on a blockchain somewhere or whatever, and you say this is a representation where each node in that diagram means something, whereas what we're saying is that there could be 10 to the 100 nodes and all they mean is that they are different atoms of space, so to speak; each one doesn't have its own, you know, life and times, if that made any sense.
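A toy version of that idea, a sketch only: read the rule BA -> B as a Petri-net-style transition on token counts, consuming one A and one B and producing one B, so only the counts matter:

    fire[{a_, b_}] := If[a >= 1 && b >= 1, {{a - 1, b}}, {}]   (* the transition needs one A and one B *)
    step[states_List] := Union @@ (fire /@ states)             (* apply it wherever it is enabled *)
    NestList[step, {{3, 3}}, 4]
    (* {{{3, 3}}, {{2, 3}}, {{1, 3}}, {{0, 3}}, {}} -- only the counts of A and B are tracked *)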
Like, I mean, okay, when you use Petri nets, do you use them en masse? Do you have a Petri net with a million entries, or is that not the kind of thing you think about? No, especially at Statebox right now, we're not thinking about them that way. That's what people do in chemistry, and you get very quickly to what are called stochastic Petri nets. So what you can do is, if you're imagining such a network and you know you have a lot of tokens, like millions of tokens, and you're no longer really interested in a precise marking, then what you do is, instead of talking about tokens, you actually give concentrations to places; these become real numbers, and then basically a Petri net will give you some kind of dynamical system, it's just a differential equation, I mean, an iterated map or a differential equation.
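As a sketch of that passage to concentrations, here is ordinary mass-action kinetics for a single hypothetical transition A + B -> C with rate k; the reaction and numbers are only illustrative:

    With[{k = 1.},
     NDSolve[{a'[t] == -k a[t] b[t], b'[t] == -k a[t] b[t], c'[t] == k a[t] b[t],
       a[0] == 1., b[0] == 0.5, c[0] == 0.}, {a, b, c}, {t, 0, 10}]]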
Yes, exactly. What we do at Statebox, for example, is different, because we are actually representing much more deterministic processes. So, for example, our Petri net can represent the ticketing process that someone has to go through: you know there will be a starting place where there will be a token that means nothing has happened yet, and then there is a "buy ticket" transition, and then maybe another one that says it has been validated, and whatever, and then the idea is that each new user simply instantiates a new Petri net that will track the status of that user's ticket. So, for example, working with this ticketing company, the idea is that we define a Petri net that represents the life cycle of a ticket, and then if they have to sell, let's say, 50,000 tickets, they literally create 50,000 instances of these Petri nets, and each one represents the stage that that particular ticket is in. I mean, it's a modern version, it's one of a variety of kinds of systems that you can use to represent these kinds of processes happening, and presumably the importance is that you can potentially do proofs based on that Petri net structure.
Yes, the advantage is that, at least, this company uses our system as a filter, so the idea is that if your ticket is in the "used" state it cannot be resold, for example, and when I say it cannot be resold it's because there is literally no transition out of the "burned" place. So in this sense this gives you a very disciplined way of knowing what actions are possible in each state. I get it, but let's imagine you have something like a Petri net that represents the poset of all events that happen in the physical history of the universe.
Okay, but that's not quite the kind of poset, because in your posets... and by the way, I want to say that I think conceptually this is very similar to what happened in creating our general model for physics. So, when I think about your Petri nets, when you think about Petri nets, every place and every transition in the Petri net has a name that is doing something meaningful to you: it's a ticket being burned, or a person, you know, getting on the train or whatever. Okay, yeah, it means something. So, similarly for me, when I think about symbolic expressions, when I write a symbolic expression I expect every part of that symbolic expression to mean something; it's not just an arbitrary f, it's an f that, even if it's just a thing with the name f, could be Plus, could be this, could be that, it means something, right? Yes. So the main feature of this model of physics is that we are using the structure of essentially symbolic expressions, but the parts mean absolutely nothing. In other words, every element in the expression that represents the universe is just one atom of space, and there can be 10 to the 400 of them, and they don't mean anything as such, right? But we're still thinking about the same kinds of mathematical structures, posets where you're saying, you know, this atom of space has to be created before that atom of space, that kind of thing, but the atoms of space don't have... you know, it's not the case that one of them is, I don't know, the place where Bob lives and another is the place where
Jim lives or something; they're just arbitrary atoms of space. So, in other words, and I guess what I'm curious about, okay, my statement may be completely wrong, both with respect to something like Petri nets and with respect to category theory, is that the potential use our model makes of these things is to take the structure of one of them in a way that is completely devoid of meaning. So, for example, when you write down one of your sequences of morphisms in a category, each morphism means something, you know: you're not just saying it's an arbitrary structural morphism, you're saying this morphism is a mapping from vector spaces to vector spaces. I mean, they're not disembodied things. But hey, once you have the category, then that category can be used as a definition of those morphisms, right?
Yeah, so I think it's actually the same thing. What we're dealing with, it's the same with Petri nets: as long as you know the arcs, if you know the places and the flow relationships, then it doesn't really matter what the individual objects are called. What matters is the combinatorial structure of the Petri net. I suppose, but as a practical matter, when people think about this, in the Petri net the fact that they put names like P1, P2, etc... I know, but that's not the usual use case. I mean, it's like, imagine in the Wolfram Language that you're dealing with expressions that mean absolutely nothing, you're just structurally throwing around, you know, disembodied expressions where every function is the function f and it doesn't do anything, right? Okay, I would have said, what's the point, it will never be useful for anything, right? But that's the basis of our physics model. In other words, it may not be, when we think about it like this... I'm just trying to get into the mindset of thinking about things like category theory as something purely structural, whereas you're talking about category theory as a way of understanding the correspondence between something you know in differential topology and something in, you...
I think probably one thing we could try to dig into a little bit is the Grothendieck construction; it isn't the easiest thing out there, but it's exactly a way of taking basically semantic information and making it purely syntactic, so you basically start with things that are about meaning and you somehow compile and transform those meanings into something that is actually purely combinatorial. So this might be interesting, maybe let's try it. Let me try a couple more things and then maybe we can dive into that. I mean, so, one of the things that we had thought about a little bit, okay, let's go over some things. So, this multiway graph, let me comment on a few bits here, so let's just draw it. Yeah, obviously, one totally trivial point, but there is a one-to-one correspondence between the execution semantics for a Petri net and the event selection functions for a multiway system.
Okay, that's interesting, because the Petri net, by definition... it's a very trivial point: by definition it's non-deterministic, because you can have multiple transitions that can fire. Yes, but in practice people define execution semantics to avoid the non-determinism, which is exactly what we do with event selection functions in multiway systems. So, okay, for Fabrizio and Matteo, just to make sure we understand what a multiway system is, because without that we will be lost: a multiway system simply says, we have some transformation, in this particular case it's just a string rewrite transformation, BA goes to B, okay, and all this means is that you apply that transformation wherever you can. Okay, so we can think of this, for example, you can also think of this as a proof.
This is a proof that BBBAAA is equivalent to BBB given the rule that BA is equivalent to B: there is a proof, and we can say that a particular proof would be some path through here that leads from the input to the output. Yeah, right. And that's what our multiway system is, and in our world, I mean, we can talk about how this relates to quantum mechanics and who knows what else, but this is a structural object in our world. Okay, so, you know, a question would be nice... so that's one thing; the next level is the evolution events graph. This is dual: what it's doing is saying, for each of these transitions from this state to this state that happens under an event, that event says this is the BA that we are rewriting to B. In this case there are two BA's that we could rewrite, sitting in different places; they lead to different results, and sometimes there will be a merging of those results, etc. Okay, so in this particular case this is a confluent rewriting system.
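For reference, a sketch of how this kind of multiway evolution is typically generated, assuming the MultiwaySystem function from the Wolfram Function Repository:

    ResourceFunction["MultiwaySystem"][{"BA" -> "B"}, {"BBBAAA"}, 4, "StatesGraph"]
    ResourceFunction["MultiwaySystem"][{"BA" -> "B"}, {"BBBAAA"}, 4, "EvolutionEventsGraph"]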
So although there are many branches, we always converge. Yes, they always reconverge, but that is not always the case. That's correct, yes, that's correct. It happens to be the case in physics, and it's quite significant that it's the case in physics, because, I mean, it leads to special relativity and it leads you to things about quantum objectivity and all sorts of other important things in physics, but that's irrelevant here. So the first thing one might think about is to what extent this picture here could be considered as a real diagram of morphisms in a category. I mean, we can presumably think of these states as the objects of a category and these transitions between states as morphisms. In that case, yes, that's right, it's perfectly possible, and, at the same time, right now, if your system is like this one, your category will be a poset, which means that there is at most one morphism between two states, but obviously if you have more than one way to go from one state to another and you don't want to, say, identify them, then you have a genuine category. Okay, and notions like confluence and causal invariance can be interpreted in terms of statements about the existence of cocones for associated cones, things like that, in these categories. Yeah, yeah, right, I mean, the assertion of global confluence is effectively a statement that for each cone there is an associated cocone. What is a cone in these categories, is it a light cone or is it something completely different? No, something completely different. Are you sure it's different? Yes, yes, I mean, I don't see an immediate correspondence between light cones and this kind of thing. So what is it? I also think they are different. How do you see the cones? In this case it would be the cocones that matter, so if you have, for instance... this is a bad example and I'll see if I can build one in a moment; I can't talk and do a construction at the same time. Okay, but yeah, my understanding of a cone is: you have some object N and then you have a family of morphisms that lead out of it. Yeah, literally: you have a diagram in your category and you have another object of the category with a morphism going to each thing in the diagram. Exactly. And a cocone, I understand, is the dual notion: if you reverse the arrows you get a cocone, right. So yeah, whenever you get a branch in one of our multiway systems, I think you can represent that in terms of the existence of a cone, because the predecessor element is the common element out of which a family of morphisms goes, and then, as long as there is a corresponding cocone, the branches merge again. Now, you could imagine calling FindEquationalProof with hypotheses and axioms that are themselves equivalences between proof objects; it happens that we don't currently support that, although I've thought a bit about how you could do it and what you would get. One thing I'd be interested in potentially implementing is that you could then have rewrites of rewrites, because every step in the standard proof objects, between zero-order symbolic expressions, is just a rewrite relation and a position, yes. But then you can apply a rewrite relation to that rewrite relation: that rewrite relation has two parts, a left and a right, and you have subexpressions inside the left and the right, and now you can apply rewrite rules to those, and so you could get a proof of equivalence between proof objects, subject to certain axioms that are equivalences between proof objects, and so on. Then you could construct some higher-order generalization of FindEquationalProof, and each of those higher-order pieces can be considered as a homotopy between lower-order proofs. Okay, so let's think about that for a second. One thing I want to say is that we are not in quite an adequate category, in some sense, in the sense that our programs are the same as our data, and yet not quite, because our proof objects have a structure quite different from the objects we are proving things about.
Right now, I mean, in other words, if we look at a proof object here, you know, this is something qualitatively similar but not exactly the same; it doesn't have exactly the same structure as the actual things we're proving statements about. But suppose in this case, with this proof, what is the proof object? The proof object is a path here, right? Yeah, right. So you're saying that what we're doing is: the original object is a string, the proof object is a path, and now we're asking for correspondences between the paths, right, so that you can think about deformations of these paths. What do continuous deformations of these paths mean, absent a notion of continuity?
We don't have a good notion of continuous; yeah, I don't think it makes sense. I mean, continuous deformation somehow implies that this thing lives in some kind of continuous space. Well, go ahead. No, I'm just saying maybe we can attach a topology to this just from the rules. The most significant type of topology that you could attach to this is called a Grothendieck topology. I think, when you say Grothendieck... didn't you say that he's a big figure in this whole area? As soon as you hear certain names, you know you're entering some deep, daunting abstraction, and Grothendieck was one of those names, with very fascinating but typically very abstract ideas. What he did was he basically took the topology axioms, you know, arbitrary unions and finite intersections, and categorified them, which basically means that you can now endow a category with a topology, called a Grothendieck topology, in such a way that if you start from something that is just a topological space, you have a way to create a Grothendieck topology that matches the one you started with. But in this case I think this is particularly relevant, because these graphs can be seen as a process, and as soon as you start considering the category of paths over these graphs, this gives you a good set of tools to think about topologies on them. Okay, I want to understand Grothendieck topologies. This can be a difficult task. It can be difficult, but I want to understand how it works, so what's the first thing you should do?
I understand, it's the sieve concept. The idea of a sieve is that you pick a vertex in your graph, whichever you want. So is this a graph I could use? This could be... Yes, we can use this without a problem. If you choose one, say you randomly choose that vertex, perfect, then a sieve is a set of arrows into that vertex. Which is... I'll choose this vertex instead, just to give me a few more arrows, okay. No, not that one; if you choose that one there are fewer arrows going there. They look good, so take this one, and basically the idea is that a sieve is any set of arrows into the vertex that is closed under precomposition. For example, you can generate a sieve by taking the arrow that goes from this state to the next, let's say, and following everything that comes back into the selected vertex. Which is this vertex here? If you take that first vertex there, there's only one possible sieve on it, which is the empty one, because it has no arrows coming in. Now if we take the one, two, three, fourth vertex down from the first... no, no, okay, let's choose that one. In that case, you'll need to increase the size even more; let's increase it one more step, the problem is that if we increase it much more we won't be able to see much of the graph, okay, here we go. So if you take this one, it's the second from the top, this one here, oh, I think that's the one, yeah, you might have some mouse lag, yeah, exactly. So for that state you can only have two possible sieves: the first one is empty, so you don't consider any arrow, and the second one is the one containing the arrow coming from BBBAAA. Okay, and now if you go to the next state down, you see the situation is a little more varied, because you can have the empty sieve, but then you have two different arrows entering that vertex, so you can consider the sieve generated by one of those arrows, yes, and again, if you take that one, you also have to take its composite with the other arrows that come into the earlier state, and those also have to be composed further back, so that it is closed downward, okay. So all we're doing here is taking the arrows back upstream, and yeah, precisely, basically you say a sieve is just a set of arrows that is closed with respect to going backwards like this. Okay, you're saying sieve, like a sieve for flour or something? Yeah. Look at the cursor, yeah, okay. In graph theory, isn't that just... what is it that does that for us? The vertex in-component, exactly, yes, right. So, okay, you're saying the vertex in-component of a particular node is the sieve of that node, is that right? Well, you can have a lot of sieves, because again, for example, in the case of that vertex, I can start by considering only one of these two arrows coming in, or neither of them, or both, right. But what we're saying is that there is a vertex in-component for this state, and we could say, for example, that the one-level vertex in-component is just the set of things that lead directly to it, the zero-level in-component is the thing itself, and that, so to speak, is that picture there; in that previous picture, if I go to the two-level vertex in-component, I will get that state there.
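A sketch of the "maximal sieve as in-component" reading in Wolfram Language, assuming g is a directed states graph like the one on screen and v one of its vertices:

    (* every edge lying on a path into v; in the simplified "set of arrows" picture this is
       the largest sieve on v, and VertexInComponent[g, v] is the corresponding set of states *)
    maximalSieve[g_Graph, v_] := With[{preds = VertexInComponent[g, v]},
      Select[EdgeList[g], MemberQ[preds, Last[#]] &]]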
I can go to level three, 1, and so on, but you're thinking in terms. of the vertex now okay, you should reconsider in terms of edges, the CV is a set of edges, okay, okay, okay, so we have those edges, we can generate them. I'm not quite sure exactly how to do it, that's probably You know, the interesting thing is that again there are a lot of different sieves that you can put on the same thing because after specifying basically a bunch of arrows you have to do the child composition down with all the other arrows you have, but for example.
If I say right, what is the sieve generated in VA EAP by the empty set? So this is the empty seat because you know you don't have any arrow to start with so you don't have to do any kind of composition if it just starts with the arrow that goes that starts at Bab ba then you can take care of that kind of fat okay , we have a build for the sieve now, what do we do with the sieve now? The point is that this thing can be used and symmetrized the equivalent of an open coverage of a topological space.
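As a concrete illustration of the construction just described, here is a minimal Wolfram Language sketch; maximalSieve and generatedSieve are illustrative names rather than built-in functions, and a sieve is represented simply as a set of edges (the full categorical definition would use paths, i.e. morphisms of the free category on the graph):

maximalSieve[g_Graph, v_] :=
  Select[EdgeList[g], MemberQ[VertexInComponent[g, {v}], Last[#]] &]   (* every edge lying on some directed path into v *)

generatedSieve[g_Graph, startEdges_List] :=
  Union[startEdges,
   Select[EdgeList[g], MemberQ[VertexInComponent[g, First /@ startEdges], Last[#]] &]]   (* the chosen edges plus everything upstream of them *)

g = Graph[{1 -> 3, 2 -> 3, 3 -> 4, 5 -> 4}];
maximalSieve[g, 4]                          (* all four edges *)
generatedSieve[g, {DirectedEdge[3, 4]}]     (* {1 -> 3, 2 -> 3, 3 -> 4} *)

The closure under precomposition is exactly what the test against the in-component is doing: once an edge into the chosen vertex is included, every edge feeding into its tail gets swept in as well.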
Now, what do we do with the sieve? The point is that this thing is used to axiomatize the equivalent of an open cover of a topological space. An open cover of a topological space is simply a set of open sets whose union covers the whole space, and the idea here is that for each vertex you choose a sieve, and you do it so that certain compatibility conditions hold between the different sieves - the conditions listed on the Grothendieck topology page. What this gives you is an abstract axiomatization of open covers in which you never need to talk about topological spaces at all, and it behaves formally just like an open cover in topology, so it lets you use a lot of geometric machinery even in spaces that are genuinely discrete like this one. And once you have a notion of open cover, if you dig into these things you automatically get an appropriate notion of homotopy equivalence, for example, and appropriate notions of continuous function and continuous deformation. So the idea is that, given that this thing doesn't naively look like a topological space, this is the most abstract way of putting a topology on it, and it's where I would start too. Can I ask a question - is there some freedom in how you attach this topology? Yes, we have some choices here. But could we make a topology that is somehow compatible with the rules that generated the graph? That's exactly what you want to do, exactly. You can see that there are, for example, essentially trivial choices of topology you could take here: one is, for each vertex, to take the largest sieve you can - literally all the arrows into that vertex. But the interesting thing is to do it in a way that is compatible with your computation rules, because then you are getting a geometric representation of what you are doing computationally. That sounds very interesting, but what are we really doing here? We're trying to make open sets - we're trying to make something that is the Grothendieck analogue of an open set. Yes. Okay, so how do we do that?
Off the top of my head I'm not sure I can answer that, but I think what's relevant here is that there is a whole set of ways of getting to the particular node of the graph you're considering, so in your models it might be interesting to view the sieves as the causal paths you can attach to a given vertex. No - because there's more information there than is represented in this graph; the exact intricate causal structures are lost here. But maybe that's the right source. And geometrically, the way to interpret this - am I right in saying that each of those choices of which arrows to include in your sieve corresponds to a different choice of open embeddings into that open set? I'm trying to get a geometric intuition for how we should think about this: the freedom to choose which family of open subsets to consider part of the sieve is some freedom in the choice of local embeddings around it. Yes - basically, in the topological case, where the sieves really do end up corresponding to ordinary topologies, your nodes would be the open sets of some topology and the arrows would be inclusions, exactly. So the freedom to choose your sieve is exactly the statement that there are many different possible sets of open embeddings into those opens. Yes, that's more or less what we expected. I think so - so you're basically saying: given this open set, I'm considering this collection of other open sets that embed into it in different ways, and since you're free to choose, you're considering lots of other open sets that map into it, and you want to do it compatibly, because you're going to do this for every object, so you have to be sure all these inclusions behave well with each other. I think the most intuitive notion of topology here is actually the notion of a coverage, which is just a choice of arrows that you declare to be your covering arrows; you can then generate everything from those and you get a proper topology. When you specify a coverage for an open set, you specify it as a family of covering arrows, but the true sieve corresponding to that should contain not only the open sets you choose but also everything inside them - and those you get for free. You just specify the big ones, saying these are the fragments of space I want to consider separately, and whatever is inside a fragment comes along, and the compatibility conditions add it in automatically, exactly. So with the causal graph of evolution, all of these properties are, I think, satisfied recursively, because each possible choice of open embedding effectively corresponds to a different choice of update order, or a different choice of spacelike-hypersurface foliation, and as long as it is consistent with the causal partial order, then we know it is also a valid open embedding for all subsets, and so on - and that gives us a valid Grothendieck topology. Wait, let me try to understand that. You're saying we look at the causal graph of evolution, right?
Each sieve choice - which is a choice of open embeddings for some open set - is really just a choice of update order in the context of a multiway system, because it is a choice from a collection of incoming arrows. Right. And we have a way to parameterize that in terms of collections of spacelike-separated update events that satisfy the causal partial order, so the condition that these things behave well with each other is the condition that they're compatible with the causal partial order. And if they're compatible with the causal partial order, then we also know that to get to that point they had to be compatible with the earlier causal edges, so we get not only the validity of the open subsets in that sieve, but also the validity of the open subsets of those open subsets, and so on. So all we need to know is the causal graph of evolution, and I think we're claiming that these different choices of open sets are effectively different foliations of our system. Well, that's exactly why I asked about embeddings, because I think that's the right way to think about it. So, just so I understand: you think the choice of open sets is basically exactly equivalent to the choice of foliation? Another way of looking at this - apologies, because Grothendieck topologies are actually a special case of an even more general notion of topology, the Lawvere-Tierney topology, and that one is more logical in its interpretation, because a Lawvere-Tierney topology is defined as a kind of modal operator on the logical structure of the topos. The point is that you can think of choosing these open sets as a way of choosing in which ways you are allowed to evaluate the truth of a claim at that point, so I guess it really does support the foliation interpretation. I mean, I can get to this point in many ways, but as an observer I have this particular set of things that I look at, and that is how I evaluate the truth of a statement here? Exactly, exactly. My question is - how do you spell this? Lawvere-Tierney: L-a-w-v-e-r-e, then T-i-e-r-n-e-y - there's no K. Okay, I was just trying to decode the name. And can you explain again what this is? In the context of topos theory - which is a really important subject to bring in here anyway - a topos gives you, inside a category, a notion of logic that is internal to the category; it's like a language I can associate to a category to talk about its objects. A stupid example: if your category has products, then logically this means that internally you have the notion of taking pairs of things. Or the closed structure we were talking about before - that's something you can use in your internal logic to make function types. Okay - there was a pretty big jump there, Jonathan; can you explain that? Yes, of course. In the context of what we were discussing before, we were talking about the type-theoretic interpretation of true and false, right? There's this neat idea that you have a True type with a single element and a False type with no elements, and you can then say a statement is true or false depending on whether its associated type is inhabited. The notion of a Lawvere-Tierney topology, as far as I understand it, gives you a way to generalize that idea so that you can talk about local truth: you can say that a truth type is inhabited locally with respect to the topology even if it is not inhabited globally. So in the context of type theory it allows you to say that certain theorems hold locally, within the region of a given term, even though they are not true globally. One way to see this is that it enriches your truth values: instead of, say, an evaluation function that takes a formula over a set and assigns it zero or one according to whether it's false or true, what you have is that the topology of the underlying space gives you the structure of the truth values in your category, so you no longer just have true or false. To give a homotopy type theory, FindEquationalProof-style interpretation of that: if you consider the topological space corresponding, in homotopy type theory, to the type of all the theorems you can prove with FindEquationalProof, you can pick a particular term corresponding to, say, the statement of commutativity, and you can say that the associated proposition is true locally - if by locally you mean something like "in the context of abelian group theory" - even though it is not true globally, in group theory as a whole. That's all true intuitively and in principle, but you're treading very difficult ground, because you're considering at the same time an infinity-category and the higher structure of a topos, which is the kind of place where you can talk about this internal logic, and infinity-toposes are not exactly the simplest things there are. This really is the current frontier. I think in algebraic geometry people have developed things that probably need infinity-toposes to make them work, but these are generally very complicated, and I think there are conjectures that basically postulate - or in some cases prove - that the internal logic of an infinity-topos is actually homotopy type theory, or some theory of that kind, but this is something I'm not well versed in, and it's still being actively researched. There is definitely a proved correspondence between homotopy type theory and Lawvere-Tierney topologies in a very restricted case.
I forget exactly - it was something at the level of sheaves or presheaves or something like that - but I've definitely seen results in that area. But I want to come back to this correspondence between open sets and foliations. What do we learn from the correspondence, even if it's limited? What are the corresponding open sets? So an open set would just be a vertex, and then the open embedding would be the collection of update events that led to that vertex, together with the collection of causal edges between the update events that led to that vertex. But what kind of vertex are we talking about here - the state vertex is an open set? I think you can treat it as one, yes; the idea is that you formally view it as an open set. Okay, let me digress for a second - are you familiar with the idea of pointless topology? I don't think so; I think I got as far as point-set topology, but no. Basically, imagine this: we know that if you take the set of open sets of a topological space, they form a complete distributive lattice - you can take arbitrary unions and finite intersections. So at some point someone said, well, the interesting topological properties actually come from this lattice of open sets, and nobody really cares about the points, so why don't we study complete distributive lattices directly and try to generalize topological statements to properties of these lattices? And that's pretty much what we're doing here.
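As a tiny illustration of the "keep the lattice, forget the points" idea, here is a minimal Wolfram Language sketch; the particular collection of open sets is made up for the example, and closedUnderQ is just an illustrative helper:

opens = {{}, {1}, {1, 2}, {1, 2, 3}};   (* the open sets of a small chain topology on {1, 2, 3} *)
closedUnderQ[op_] := AllTrue[Tuples[opens, 2], MemberQ[opens, Sort[op @@ #]] &]
{closedUnderQ[Union], closedUnderQ[Intersection]}   (* {True, True} - the lattice operations are all we keep *)

Pointless topology then asks which topological statements can be phrased purely in terms of this lattice, with no reference to the points 1, 2, 3 at all.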
This is no longer a topology in the strictly geometric sense, but the idea is that, thanks to the Grothendieck topology, we can treat these state vertices as our open sets - even if they're not open sets in any literal geometric meaning, they behave like them formally. Sorry, Stephen, can I just mention one thing - you actually already know about this, because we discussed it in the context of the continuous multiway limit and projective Hilbert spaces. Do you remember when you came across the notion of continuous geometry? There is von Neumann's construction of projective Hilbert space purely in terms of a modular lattice, and this is exactly that.
That's the right idea - the sense in which you have a correspondence between a modular lattice and a projective Hilbert space is exactly the sense in which the combinatorial structure of the modular lattice tells you about the relations between the subspaces of the Hilbert space without ever having to talk about individual vectors in the Hilbert space. Okay, wait a minute. When you talk about sets it's a discrete, almost combinatorial kind of thing, and you're saying you can talk about relationships between sets without even thinking about the underlying thing. So the interesting thing - and maybe this is what you were basically saying - is that what we have in our systems is that description of the relationships between the things we're describing as open sets, and then the question is whether that sits on top of some continuous element.
I mean, you're saying there could be a continuous element of which our system is the open-set description, so to speak. Yes, exactly. So what I'm saying is: is there a sufficiently generalized notion of topology for which these things behave as if they were open sets? And once you have a topology, you can then define the notion of a continuous function, a continuous mapping, over these topological spaces. Okay, so given this idea that the open sets are the nodes of this graph, what is the notion of continuity? Well, the point is that thinking strictly in terms of open sets is a little misleading, because what the Grothendieck topology really does for you is set things up in terms of open covers - all this choice of sieves and so on gives you the equivalent of an open cover of the space - so the first step would be to describe continuous functions in terms of open covers rather than open sets. Suppose we have a function on this multiway graph.
A function means you assign a value, for example, to each node in the multiway graph? Yes, we're mapping these vertices to something - basically we assign a value to each vertex. So what would be an example of how we could define a continuous function? What is a continuous function here? Use the definition that says a continuous function takes open sets back to open sets. Right - so if you have two of these graphs, like the one I'm looking at now on the screen, and you have a mapping that sends the vertices of one to the vertices of the other, then you ask whether open sets go to open sets - and actually here the correct notion of open set would be not a single vertex but the whole set of vertices and edges that lead into it, the backwards closure from that vertex. Right. But the motivation we started with was trying to think about homotopy in this context - what continuous deformations of paths look like. Are we at a point where we can think about that? Because that's something I thought was genuinely interesting: which two paths are homotopic, which paths are close to each other and which aren't.
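Before going on to the homotopy question, here is one naive Wolfram Language sketch of the "open sets go to open sets" criterion just mentioned, with the open set around a vertex approximated by its in-component; continuousQ and openSet are illustrative names, and this is only a rough proxy for the Grothendieck-topology version of the definition:

openSet[g_Graph, v_] := VertexInComponent[g, {v}]   (* vertices lying on paths into v, including v itself *)

continuousQ[g1_Graph, g2_Graph, f_Association] :=
  AllTrue[VertexList[g1],
   SubsetQ[openSet[g2, f[#]], f /@ openSet[g1, #]] &]   (* the image of each open set lands inside an open set *)

continuousQ[Graph[{1 -> 2}], Graph[{"a" -> "b"}], <|1 -> "a", 2 -> "b"|>]   (* True for this trivial vertex map *)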
Can we answer that question? Actually, I want to make my suggestion here. We have this branchial graph idea, which is a way of deciding which nodes are close to which nodes. We could imagine a similar kind of thing that asks which paths are close to which other paths, using the same kind of construction. The branchial graph is built by taking - I mean, you could take the branchial graph here to be saying which nodes are related by being successors of a common ancestor. So I would say the analogous thing would be: if you want to know which paths are close, we could imagine doing the same thing for paths - making a branchial graph with respect to paths - and we keep talking about doing that. Jonathan, do you have any comments about that? I mean, in other words, we can get a map - let me take an example that's more interesting than this one. What's a good example? Let's see, let me just try to get an example of something that would make a reasonable branchial graph; wait a second, I'll pick one. I agree it's obviously something worth constructing, because once you have the topological interpretation of a path - as effectively a parameterized family of vertex in-components - then you can ask whether the open sets in one of those families map to open sets in the other family, and then you definitely have the notion of being able to define a distance between paths. So let's talk about that.
I think this is very interesting. Okay, so let's explain to people what the branchial graph is, because it's relevant. This is a multiway system - let's just draw this multiway system; the one I had here is stupidly complicated, so let's take a very trivial multiway system, run it for, say, four steps, and draw the states graph. Okay, so here's the states graph - let me make it wider, sorry. Now look at these two states here: they have a common ancestor here, so our rule for making the branchial graph is to join two states in the branchial graph if they have an immediate common ancestor.

Okay, so those two states have a common ancestor, these two states have a common ancestor, these two states have a common ancestor. So now let's make the branchial graph - say the branchial graph at step three. The step-three branchial graph should basically be a map of this level of the multiway graph; it's saying how these states are related to one another. These two states - the one with four A's and three B's, say, and its neighbor - are related by being linked in the branchial graph, because they have a common ancestor. That makes sense. Okay, so this is a way of relating states by looking at their common ancestry.
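For reference, pictures like the ones being described can be generated with the MultiwaySystem resource function from the Wolfram Function Repository, assuming that resource is available; the particular string rule and step count below are illustrative stand-ins, not necessarily the ones used in the session:

mw = ResourceFunction["MultiwaySystem"];
mw[{"A" -> "AB", "B" -> "A"}, "A", 4, "StatesGraph"]      (* the states graph of a simple string multiway system *)
mw[{"A" -> "AB", "B" -> "A"}, "A", 4, "BranchialGraph"]   (* states joined when they share an immediate common ancestor *)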
Now what he effectively seems to be talking about is whether there's a similar way to relate the paths to each other - that's the homotopy question. So in this picture, we're asking what the distance between this path and that path is, and we'd say it's the branchial distance. Well, in quantum mechanics the interpretation of this graph is as the map of entanglements between states, and it's telling us that the entanglement distance between these two states is two, in this case. Okay, and the suggestion is that the corresponding path distance will be an approximation to an ADM-like metric distance. Okay, that's what I was about to ask: what's the physical interpretation? So let me make sure I understand what you're saying. In the ADM picture, each of those paths is a choice - is it a foliation choice that we can parameterize in ADM terms? I see - each path in the multiway graph is a particular foliation? Yes, you can represent it as a particular foliation, so you're effectively asking what the distance between two foliations is. My guess is that what you'll get in the end is an approximation to the gauge distance - the total lapse distance plus the total shift distance between those two gauge choices. And how is that measured in the ADM formalism?
Between two gauge choices? Well, that's what the ADM formulas contain - or, to be more precise, that's what my discrete generalization of the ADM formalism does. You say: you have a hypersurface, and the way you parameterize how you get to the next hypersurface is that, for each point on that hypersurface, you give the lapse - the timelike distance between that point and its corresponding point on the next hypersurface, i.e. the number of causal edges you have to traverse - and the shift - the spacelike distance, i.e. the combinatorial distance in the hypergraph between where that update event was applied and where the next update event was applied.

As long as you can determine those two quantities for every event, that completely determines the foliation of the causal network. Then, given two foliations, you can ask what the total lapse distance and total shift distance between them is - the total timelike distance and the total spacelike distance, in some construction, say as a volume average over the whole causal network - and that effectively gives you a metric on the space of possible gauges. And I think what we're basically doing here is building a discrete approximation to that measure. Okay, let's try to unravel this for other people for a second. The ADM formalism: we have a differential equation, a PDE - Einstein's equations, say - and we're effectively trying to solve it as an initial value problem, so we take a spacelike hypersurface, a simultaneity surface, and we try to work out the values of the relevant quantities - curvature, whatever else - on a later spacelike hypersurface, right? And then the point is that there are many choices for our foliation of spacetime into spacelike hypersurfaces, and what Jonathan is saying is that you can parameterize those different choices by giving, at each point, a lapse function and a shift vector: the lapse function tells you how far in time you go to get to the next spacelike hypersurface, and the shift vector tells you how far in space you move at each point. Okay, so now we have one family of surfaces corresponding to one possible ADM sequence, so to speak, and another family of surfaces corresponding to another ADM sequence, and you're saying there's a distance metric between those two families - but I didn't quite understand how you find that distance. Well, you take, say, the initial hypersurface of family one and the initial hypersurface of family two, and you can overlay them on top of each other and define - now, of course, you'd probably violate your global hyperbolicity conditions, which in the standard ADM formalism isn't allowed, because those hypersurfaces will obviously intersect each other, but a lapse and a shift can still be defined; it's true the lapses can be negative, which obviously wouldn't be allowed in the conventional formulation, but we can allow it here - and so you can compute the overall lapse and shift distance between those two hypersurfaces. But these are two different hypersurfaces at the same time, so what you're describing is a fake evolution rather than a genuine evolution: the step from the hypersurface in foliation one to the hypersurface in foliation two is a fake evolution that just uses the same formalism to make the comparison between the two hypersurfaces. Exactly - or, in other words, instead of comparing hypersurfaces at two different values of the universal time function, we're comparing hypersurfaces at the same value of the universal time function that belong to two different foliations.
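A very rough Wolfram Language sketch of the kind of comparison being described - not the construction from the paper: each foliation is represented as an association from update event to slice number, and the "distance" below is only the volume-averaged lapse-like term; the shift term would need the spatial positions of the events in the hypergraph, which this sketch omits. foliationDistance and the event names are illustrative:

foliationDistance[f1_Association, f2_Association] :=
  Mean[Abs[Values[f1] - Lookup[f2, Keys[f1]]]]   (* average, over events, of the difference in slice assignment *)

foliationDistance[<|"e1" -> 1, "e2" -> 1, "e3" -> 2|>, <|"e1" -> 1, "e2" -> 2, "e3" -> 3|>]   (* 2/3 *)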
Right. But isn't the defining feature of a sheaf something similar? Because it seems to me that what we're trying to do is say: I assign these things to some things, and if those assignments overlap somewhere, I can lift them to an assignment on the union of the two overlapping pieces. Did I understand that correctly, or does that sound completely wrong? It's not exactly the way I'd put it, but yes - sorry, go on, go ahead.
I mean, does this correspond to sheaves? I've never understood sheaves. I can give you a simple example - let me try this trick; I'm trying to switch my camera so you can see my hands. I'm not very technical with this, unfortunately. Well, at least you're left-handed - all interesting people are left-handed. Thanks, yes, I'm left-handed. So imagine this is my space - it's literally just a plane and nothing more - and now I want to attach some kind of information to regions of the plane. For example, I can take this region here, and to it I want to attach a set of things.

Again, it doesn't matter what these things are - they could be, for example, the set of continuous functions on that region. Yes, let's do the example with continuous functions; it's a good one. I have my space, I take this region here, and I attach to it the set of continuous functions on this region. Okay, now I can take a subregion inside it and attach to that its own set of continuous functions, and what I have now is: this subregion is included in the big region, and the attached sets go the other way - contravariantly - because you can take any continuous function on the big region and map it to its restriction to the little one.
Are you following me here? I think so. So the idea is that in category theory I have a base category C whose objects are the open sets, ordered by inclusion - so, for example, I have an inclusion of U into V - and this is mapped contravariantly to Set, which means that if U goes into V, then the continuous functions on V map to the continuous functions on U via restriction. Cool. Now, what being a sheaf tells me is that this assignment is consistent, which basically means two things. The first is the condition on overlaps: imagine I have this situation, a small region sitting inside two big ones; since the small one is included in both, I have two restriction maps like this, and the idea is that if the two sections match on the overlap, I can join them together - from this one and this one I can get a single section defined on all of it.

Let me give you a one-dimensional example, because it's probably easier. Imagine this is the real line, and to each open set I attach the continuous functions on it. Clearly, if I have two overlapping open sets and two continuous functions that agree on the overlap, I can get a single continuous function defined on the union just by gluing them together. Yes - these are analogous to the coordinate charts in an atlas construction for a manifold. In fact, a lot of this machinery in algebraic topology and differential geometry can be rephrased in terms of sheaves; there's a correspondence between those constructions and sheaves. So far we only have correspondences between things like graphs - where is the sheaf?

The sheaf is basically this assignment itself. Given a category C that represents your base space, a sheaf is just an assignment into Set - again, this means that to each region I attach a set of things, and whenever I have an inclusion, it corresponds to a restriction map - such that the gluing condition holds; and furthermore there is a locality condition, which says that if two assignments agree on every open set of an open cover, then they agree on the whole set being covered. This means the information is locally determined: if two things agree locally everywhere, they have to agree globally.
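A minimal Wolfram Language sketch of the restriction and gluing just described, using the presheaf of arbitrary set-valued functions on subsets of a small finite "space"; restrict, agreeOnOverlapQ and glue are illustrative names, and sections are represented as associations:

restrict[section_Association, u_List] := KeyTake[section, u]   (* the restriction map for an inclusion of u into a larger open set *)

agreeOnOverlapQ[s1_, s2_, u_, v_] :=
  restrict[s1, Intersection[u, v]] === restrict[s2, Intersection[u, v]]

glue[s1_, s2_, u_, v_] /; agreeOnOverlapQ[s1, s2, u, v] := Join[s1, s2]   (* a single section on the union *)

glue[<|1 -> 0.1, 2 -> 0.4, 3 -> 0.9|>, <|3 -> 0.9, 4 -> 1.6|>, {1, 2, 3}, {3, 4}]   (* the two sections agree on {3}, so they glue *)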
In general, this is not a super clean way of saying it, but the idea is that a sheaf is a particular kind of presheaf: a functor to Set, where the choices are which object I attach - continuous functions, say - and what I do with inclusions, namely map them to restrictions. But I could do things more generally: I could say that to these regions I literally just attach a bare set of things. Okay. Now, in Minkowski space - because if we're looking at inertial frames in Minkowski space we have a very trivial collection of possible foliation families - there's no topological obstruction when going from one inertial frame to another, is that right? Well, in this particular case you've chosen the one case where this technology doesn't help you, because the local choice of gauge determines the global choice of gauge for an inertial frame; that's basically the definition of an inertial frame. But for anything other than an inertial frame, simply knowing the local gauge choice doesn't tell you enough: you generally need all the information about the local gauge choices to determine whether you have a globally valid choice of gauge, because you need to be able to determine whether it's consistent with the causal partial order. So, to understand what he means by local choice of gauge: he means a coordinatization - some spacetime coordinates, a local coordinate patch, a chart. Right. So for anything other than the trivial case of an inertial frame - anything with non-trivial curvature, for example, or any coordinate system that isn't just a flat coordinate system - we have all these local choices of coordinates, and now you're asking how we transform from one to another. And I'm also confused by something else: in a fiber bundle, for example, we would also have a local choice of coordinate system, and then we have this notion of a connection that tells us how we move between different local patches. But in this case we just have one bunch of local patches that we're setting up, and another bunch of local patches corresponding to other coordinates for another foliation, and you're asking when - well, the details of this are in the general relativity paper, but our analogue of the notion of a connection is a collection of weights on the hyperedges or causal edges in our networks. The default choice - the Levi-Civita-style connection that you make in general relativity - corresponds to the statement that everything has unit weight, and in particular that if there's an edge connecting vertices u and v then there's also an edge connecting v and u, with exactly the same weight. But if you relax that condition, you're allowed more general kinds of connections, and in particular you can have metrics with torsion and things like that. And what page do I go to, Jonathan? Oh, it's late - wait, let me find it; it's in the place where I define Riemann curvature on hypergraphs. We're talking about open sets here - oh yes, okay, if you look at pages 29 to 30. What on earth is this?
I didn't know you put this in - this is the arbitrary-dimensional version of the Robertson-Walker metric. Yeah - oh, you told me to put that in. Well, I thought it was a good idea. Anyway - pages 29 to 30. Okay, so if you look here, at the end of the definition of the discrete distance we have this epsilon parameter. Epsilon is a coupling parameter that you can think of as the elementary weight we assign to each hyperedge, and later on I say that we work in particular with the case where epsilon takes the unit value everywhere - where each hyperedge is taken to correspond to one unit of spatial distance. But you could have considered the case where edges carry different values depending on direction; that's where the torsion-type metric solutions come from, and in the continuum limit you can actually define arbitrary connections this way. Okay, but again, why do we care about this?

So one of the reasons we care is that what we're talking about are transformations between possible reference frames, which I claim - although no one else believes me yet - will be important for distributed computing, and for thinking about that, this idea of what the possible reference-frame choices are is going to matter. But you're saying: is there an obstruction to continuously going from one choice of a family of reference frames to another? What is the physics - what's the general relativity version of the statement that there's a topological obstruction? That would be the claim that if you have two sets of spacelike hypersurfaces, corresponding to two different choices of universal time function, and you want to construct a continuous deformation from one to the other, there will be certain cases where, even though the initial and final families of hypersurfaces are both consistent with the causal partial order, there are intermediate stages where that is not satisfied - and those would correspond to a topological obstruction. Okay, but what is that, physically?
So you're saying - I don't have a good answer - well, let's think about that for a second. You have two choices of frames of reference, and what you're saying is that as you move between them, doing a smooth deformation of the coordinate system to get from one to the other, you hit something that goes wrong. A trivial example: consider a universe with a cosmic event horizon. You have an observer on one side of the cosmic event horizon and an observer on the other side; they have different frames of reference, naturally defined by the expansion of the universe, but there is a topological obstruction preventing you from transforming one into the other, namely the cosmic event horizon itself. Okay, but this is more general. Go ahead. I mean, this is exciting precisely because the cosmic event horizon is an obvious topological obstruction, but there may be very non-obvious topological obstructions that we don't know about. And what it means when there is a topological obstruction is that these choices of reference frames we're thinking about - these choices that can be considered as a sheaf of sorts - get broken up: there is a way to essentially break up the space of all possible choices of reference frame, to break it up discretely, because of these topological obstructions, so we can classify the equivalence classes - the groups of reference frames that are continuously deformable into one another. So physically - my God - what on earth is that, physically?

I mean, take the black hole, for example. When you look at different coordinate choices - Kruskal coordinates, this coordinate system or that one - you see things like this happening: is it continuously deformable from the original Schwarzschild coordinates to some more modern coordinate system for the black hole? Well, no. The particular case that I think is probably most illustrative: if you try to go continuously from the Schwarzschild metric to the Gullstrand-Painlevé metric, then because of the existence of the coordinate singularity there is no such deformation - that's a topological defect in your coordinate system, so there's no smooth path. Obviously, if you get rid of the coordinate singularity then you can make a smooth transition between the two, but otherwise you can't, because the coordinate singularity has to go somewhere. But I thought the coordinate singularity was a special feature of a bad coordinate system. Exactly - that's why the Schwarzschild coordinate system is not a good one if you're really interested in black holes.
So let's take an example on the sphere. Can we get an example on the sphere? I mean, there are lots of coordinate systems on the sphere that have all kinds of crazy singularities, so if you're asking about different coordinate systems - it's like asking, can you map from, say, the stereographic projection of the Riemann sphere to the Mercator projection, or - I forget my projections - a Lambert projection, right? Okay, but if you wanted to go from a polar coordinate system to something given by projection onto the Riemann sphere, then you might be in trouble. So within map projections you're saying that for some map projections there is a continuous deformation from one map projection to another. So let me take a random collection - this is guaranteed dangerous - let's take a random sample of ten map projections just for the sake of it; I bet half of these aren't going to work, but anyway. Let's say something like this - it's really kind of horrible to look at, but it even works. So I'm just trying to be concrete when talking about map projections.

Wow, why is this so slow? I have a bad feeling that some of these map projections are something really strange - some of them are just coordinate charts. Oh, look at that. Well, let's just try this - ah, that's the wrong thing - what am I trying to do here? I just want to do something where we can actually recognize what's going on; I mean, we can see something is happening between these map projections right at that intersection. Okay, here we go - here are some map projections. So what you're saying is: there are some pretty exotic map projections here, but some of them will have the property that we can continuously get from one - say that one - to this one, maybe, while other map projections, like this crazy creature here, maybe not. So I don't even know what characterizes the ones we can get between continuously.
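Something along these lines can be reproduced with documented Wolfram Language functions; this is only a guess at the sort of code being run in the session, and a few projections in a random sample may need extra parameters and fail to render:

GraphicsGrid[
 Partition[
  Table[GeoGraphics[GeoRange -> "World", GeoProjection -> p, ImageSize -> 150],
   {p, RandomSample[GeoProjectionData[], 6]}],   (* GeoProjectionData[] lists the available named projections *)
  3]]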
It may be a piece of math I should know, but what you're basically asking is: when is it the case that there is a continuous deformation? Yeah - I don't know enough about map projections to know the answer to that particular question, but in the case of GR, as I say, if you want to map, say, from the Gullstrand-Painlevé coordinates to the Schwarzschild coordinates, you run into this problem: by definition, the coordinate singularity is a discontinuity, so locally, in the region where that coordinate singularity has to appear, the mapping from one coordinate system to the other necessarily becomes discontinuous in that neighborhood. Right. Hey, we should finish up - it's getting too late for everyone.
Okay then - let me try to summarize a little. I'm still, you know - this whole question of how category theory helps us, and what the correspondent in category theory is of the things we're talking about - we've been somewhat slowly circling around that question and its answer. And you've raised a number of interesting things; I mean, I think I'm starting to feel a little more confident saying something about this notion of mapping, not between coordinate systems but between reference frames, or between foliations, though I don't think we understand that yet. And we've talked a little about things like Petri nets and their representation of distributed computation - the analogue of distributed computing here.
I don't know - this has been a complicated discussion, and I didn't really get my answer to how I should think about infinity-categories either. I mean, do I get one more chance to try to understand what they are - how should I think about them? The point of view that can probably help: there is a notion of higher category called a bicategory. In ordinary category theory you require that composition of morphisms is associative, for example, and that the identity laws hold on the nose; but as soon as you introduce higher morphisms - morphisms between morphisms - you can relax this. Instead of saying (a ∘ b) ∘ c equals a ∘ (b ∘ c), you say you actually want them to be isomorphic, and isomorphic in this case means there is a pair of morphisms between morphisms, each inverse to the other. That's connected to the idea I mentioned before, that you can think of higher categories as a weakening of standard category theory: we had a standard category, and now we've weakened the definition, because instead of requiring an equality we only require an isomorphism.

But then I can play the same trick again: I can consider the composition of morphisms between morphisms and ask whether that is associative up to equality or only up to isomorphism, and so on - and if you keep doing this kind of thing inductively, what you get is an infinity-category, where basically everything holds only up to higher isomorphism. And this is connected to type theory, because the real difference between having an equality and having an isomorphism is that an isomorphism is a constructive kind of thing: you actually have to give the morphism that witnesses this weak associativity. So in this sense it's like the proof-relevant version of a category: you can't just say that (a ∘ b) ∘ c goes to a ∘ (b ∘ c); you also have to specify how you get there, and if you specify two different ways of doing it in two different contexts, then you have to specify a way to go from one to the other, and so on, up to infinity.
I don't know if this clarifies things or really complicates them. No, I just don't have the intuition. I mean, for me it would be very useful to understand: we have proofs, we have correspondences between proofs, and we continue that process. So Stephen, imagine you have a version of FindEquationalProof in which you introduce a theorem of the form proof-object equals proof-object - but those proof objects are themselves proofs of equivalences between proofs, and so on; the individual substitution lemmas are not substitution lemmas on expressions but substitution lemmas on rewrite rules that are themselves proofs about other substitution lemmas. Yes - so the point, I think, is that the highest level of generalization is one in which, if you show that two proof objects are equivalent, where each of those proof objects is itself a proof that two proof objects are equivalent, then proving the first two equivalent also proves the proof objects contained in them equivalent, and the proof objects contained in those, and so on. It's a great weakening of what would otherwise be an infinite number of equivalences:

you just prove one equivalence, and then all the ones below it - all the levels below it - follow, like a cascade downwards. This is the notion of weak equivalence between terms. What does it mean? I'm still very - I mean, I think the model for that space is the homotopy types, the infinity-category of homotopy types. Take two terms of type A, call them p and q, and now I can consider proofs that these two things are the same - so where do these proofs live? Well, they live in this type here, the identity type, the type whose elements are proofs that p and q are equal. Great - but then in this thing I can consider w and w1, which will be two proofs that those two things are equal, and again I can consider the type of proofs that those are equal, and you can see that this already keeps going forever.
This is p and this is q, and a proof is equivalent to saying we can go from one to the other, which means I can deform one into the other; and I can have two different ways of deforming these two things into each other, and now this higher thing will contain the evidence that those two deformations are themselves homotopic to each other, and I can keep going. Going all the way to infinity is actually very difficult to visualize. Well, okay, but in the case of physics, for example, this is like saying: okay, you have these actual evolution paths, we have the causal connections between events that happen, we have the foliations, which are connections between those - and then what do we have beyond that? What should we be thinking about?

Well, we might not have anything beyond that - it's perfectly fine if we just have a 2-category or a 3-category or some finite n-category. But I'm just curious whether there is an interpretation of going to infinity. Well, that's what I mean - I think it depends quite a bit on your model. What's worth noticing here is that the way we go to infinity is to define things inductively from the start; there's a convenient inductive recipe. In the physics example, my feeling is that there is a big difference between how the level-zero things are interpreted and how the level-one things are interpreted, because you say, okay, the first things are the causal structures and the second things are equivalences of causal structures, which may not themselves be causal structures. So I think asking whether we have an infinity-category structure here is equivalent to asking whether there is some way of continuing to think about relations between relations between things, and the answer may well be no.
Actually, I'd be a lot happier if the answer were no, because infinity-category theory isn't the easiest thing in the world, so if you can avoid it, so much the better. Well, the thing for me is that understanding these first few levels is already fun - it's already been interesting - and I just have the suspicion, maybe it's just a piece of aesthetics so to speak, that there is something to understand beyond those levels. And maybe what Jonathan is saying is that the next level is the obstructions between foliations, between frames of reference.

Basically, yes - the example was with foliations, but if you want to state the same thing in terms of multiway systems, it would be this: in an ordinary states graph of a multiway system, if you have two paths through that graph - if you just have two path specifications - you can find a mapping between those two paths that maps the vertices of one path to the vertices of the other and the edges of one to the edges of the other. But the fact that you can construct that mapping doesn't actually tell you that those two paths are valid paths in the multiway states graph. What do you mean?

I thought you said you started with two paths there. Yes, that's how they were generated, but let's say I just hand you two paths - I say, here are two path graphs, and here's a mapping from one to the other. From that information alone you cannot deduce that those two paths are actually valid paths with respect to the original multiway system. Right - that's equivalent to saying that if I just give you a proof of equivalence between two proofs, that by itself doesn't tell you that the actual expressions those proofs are about are themselves equivalent. Yes, exactly. So if there were an infinity-categorical model for our physics models, what it would be is an ultra-generalization of the rulial multiway graph in which, if you could show an equivalence between two paths, that would necessarily give you equivalences between everything at all the sufficiently lower levels of abstraction, including in particular the equivalences between the actual states at the endpoints. Yes - I see, that's the kind of thing I'm looking for. Yes, though this is an object that hasn't been constructed; if such an object exists, those would be its properties, but we don't know yet. But to say it's a generalization: it's the ultimate kind of correspondence between description languages in the rulial graph - that's what you're basically saying - an ultimate kind of equivalence that includes correspondence between description languages, correspondence between update orders, correspondence between causal foliations, and indeed correspondences between individual states, all as special cases of whatever this more generalized kind of equivalence is.
It seems like it's worth seeking out, and that seems to be the ultimate prize of category theory, so to speak, in these models: understanding that. Because, coming back to the original discussion here about category theory and its use as a pattern for seeing different kinds of things, being able to see that everything we're talking about can be put into one framework seems genuinely useful. I mean, take our correspondence between, for example, features of physical space, features of branchial space, features of rulial space - I mean, the challenge -

I've kept trying to understand - okay, you want to get really weird here - what is a particle? A particle in physical space we understand, at least what it could be topologically. What is a particle in branchial space? Jonathan, here's one we don't know. And we've talked before about what a particle in rulial space would be - that's an even stranger thing. Anyway, we should end here, but this was really interesting, and although I haven't completely undone my fear of category theory, it might eventually make sense. I gather that takes years.
I mean, it's by far the hardest thing I've ever had to study. Basically, when I started my PhD I didn't know anything about category theory, and I was forced to learn it because everyone in my research group spoke category theory, so I had no other option - and if that hadn't been the case I wouldn't have learned it, because I find the most difficult thing about category theory is that it's not enough to understand the definitions; it really requires a change of mentality. You have to stop asking who the elements here are and what they do, and start asking what relationships this structure has with other structures, and you have to adopt this perspective to be able to give the correct definitions - and that is something very difficult to do.
I got to the point where I understood all the definitions and still felt like I hadn't understood anything at all, and then at some point it clicks. So yeah, it's not easy. Let me ask a couple more questions about this. I mean, in a sense the description you gave makes me more sympathetic, because I've been working with symbolic languages forever, so there are things there - complicated abstract constructions, functions that build functions that build functions - that are pretty easy for me to understand, but then I've known them most of my life, so to speak. So it makes me more sympathetic to people who find those things difficult to understand, and to my own difficulty in understanding category theory. But if I wanted to - I mean, in terms of concrete, applied category theory -

one of the best things about symbolic languages is that there is a concrete instantiation of what's happening, and for category theory, I mean, is it the case that what you're building with your whole Statebox setup and so on is an attempt to make a concrete instantiation of what can be done with category theory? Yes, partly - obviously, being a startup, we're actively trying to create a product before the money runs out, so our main goal is basically to create a product, but yes, we think the categorical perspective will help a lot.
That said, we're careful to only push category theory to the point where it's actually useful to us. But in general there is a good book on applied category theory by David Spivak and Brendan Fong. I think I may even have that book - I have tried to do my homework in this area - yes, this one, and I think it's pretty good. There is also a good book by Emily Riehl. All of these books are full of examples of how these categorical devices can be used to model real-world things. And by the way, at Statebox we also offer category theory training courses for companies, and that's the same perspective we take: we present a concept and shortly afterwards present examples.

I think that's the only reasonable way to understand category theory, especially for applied people. For pure mathematicians, my own suggestion is to just work through Mac Lane's book, which is a difficult book for sure, but at least it doesn't fool you. I remember one of the most disappointing things that happened to me at the beginning: I started studying category theory from a book that was very easy and gave lots of examples that seemed reasonable, but really the book was just indulging my set-theoretic understanding of things. So every time I'd go, oh yes, okay, I get this, oh yes, I've got this - and then around page 200 it finally introduces constructions that really can't be understood from a set-theoretic point of view, and at that point I realized I literally hadn't understood anything; I had just wasted a month of my life thinking I understood.

At least Mac Lane's book is honest: it may take a week to read ten pages, but when you've read them you really do understand what's going on, so it's slow, incremental progress. And let's see if I can fit, into a lifetime so to speak, understanding how category theory works categorically. Okay, we should end here - thank you very much everyone, and thank you, that's it.
