
Noam Chomsky, Fundamental Issues in Linguistics (April 2019 at MIT) - Lecture 1

Well, I would like in these two talks to talk about some fundamental questions — particularly the most important ones, I think — namely: what are the fundamental computational operations involved in the construction of syntactic objects, and why these and not others? It turns out there's a lot to say about that since the last time I spoke here: some problems, some solutions. I'll get to these in the course of the discussion as far as I can, but I think it would be useful to begin with a couple of more general comments about what it is we're all trying to accomplish together in studying language. There are many different ways of looking at it.
These questions, I believe, are in many ways more important than particular technical results. They raise questions about what counts as an authentic, genuine explanation, a genuine solution, as opposed to what may be a very valuable reorganization of the data, a posing of the problems, that is often taken to be a solution but doesn't actually achieve it. These distinctions are worth thinking about carefully. I think the basic questions were formulated, perhaps for the first time, quite perceptively at the outset of the scientific revolution in the 17th century. Galileo and his contemporaries, who were raising all sorts of questions about received wisdom, also turned their attention to language, and expressed their astonishment at the miraculous fact that with a couple of dozen sounds it was somehow possible to express an infinite number of thoughts, and to find ways to convey to others, who have no access to our minds, everything that is going on in our minds. In their own words, which I like to quote, they were struck with wonder at the marvelous invention by which, using twenty-five or thirty sounds, we can create the infinite variety of expressions which have nothing in themselves in common with what passes in our minds, and yet allow us to reveal all our secrets, and to make intelligible to those who cannot see into our minds everything we conceive, and all the diverse movements of our soul.
Galileo himself regarded the alphabet as the most stupendous of human inventions, because it had these amazing properties, and also because, as he put it, it made it possible to express all the wisdom of the ages and to contain the answers to any question we might ask — something like a universal Turing machine, in our terms. The Port-Royal Grammar and Logic, which was actually paraphrasing Galileo, had a lot to say about logic and linguistics; in many ways it is the basis of modern logic. A rich tradition developed from there, exploring what was called rational and universal grammar: rational because it was supposed to provide explanations, universal because it dealt with what was taken to be the common human possession of language. It sought explanations, including descriptions even of the vernacular languages, which was quite surprising and innovative at the time, but mainly universal explanations, trying to find what is common to all languages. This tradition continued for a couple of centuries, with many contributions.
The last representative of it, about a century ago, was Otto Jespersen, who said that his concern was how the elements of language come into existence in the mind of a speaker on the basis of finite experience, yielding a notion of structure that is definite enough to guide him in framing sentences of his own — crucially, free expressions that are typically new to speaker and hearer — and, beyond that, to find the great principles underlying the grammars of all languages. I think it's fair to interpret the tradition — it's often metaphoric and vague — but I think it's fair to extract from it the recognition that language, the capacity for language, just like individual languages, is a possession of individual persons, part of a person; that what is shared, it was recognized, is common to the species without significant variation; and that it is unique to humans in fundamental respects. The general program therefore falls within the natural sciences, within what is nowadays called the biolinguistic program. Of course, it ran into many difficulties: conceptual difficulties, empirical difficulties — the evidence was pretty thin, and nobody really understood how to capture the notion, Jespersen's notion, of structure in the mind. What is that thing that allows us to construct in our minds infinitely many expressions, and even to find a way to convey to others what is going on in our minds? It's fair to call that the Galilean challenge, and it still stands. Well, all of this was swept aside in the 20th century by structuralist and behaviorist currents, which typically adopted a very different approach to language, taking the object of study to be not something internal to the person but something external: a corpus, perhaps, an infinite set of expressions, some other external formulation. You see this very clearly if you just look at the definitions of language given in the early 20th century by the major figures. For Saussure, a language is a kind of social contract, a collection of word images in the community of speakers. For Bloomfield, a language is the totality of utterances that can be made in a particular speech community. For Harris, it's the distribution of morphemes. Moving on to the philosophy of language — Quine, say, in Word and Object — a language is, I'm quoting, a fabric of sentences associated with one another and with stimuli by the mechanism of conditioned response; elsewhere, an infinite set of sentences. David Lewis, in "Languages and Language," likewise: a language is a set of sentences, an infinite set. Both Quine and Lewis crucially argued that it makes sense to talk about an infinite set of sentences, but not about a particular way of generating them — which is a very strange notion if you think about it, since these are leading logicians and philosophers. You can't talk about an infinite set coherently unless you have some characterization of what's in it and what's not in it; otherwise you're saying nothing at all.
The grip of behaviorist beliefs was so powerful that the natural idea — that there could be a privileged way of generating this infinite set — was regarded as crazy, or, for Lewis, unintelligible. But whatever any of these entities are, they're outside the individual. The tradition was completely forgotten; people like Jespersen, the last representatives, were literally unknown. There's a good review of this by a historian of linguistics, Julia Falk, who runs through the way Jespersen disappeared in the first half of the 20th century — the whole tradition did. That lasts to this day: even the historical scholarship in linguistics is quite thin, and it barely recognizes any of the things I've mentioned. Well, by the time the tradition was recovered in the mid-20th century, there were clear ways to capture the concept of structure in the mind, the Jespersen concept, thanks to Turing and other great mathematicians who had established the tools for addressing the Galilean challenge — work I'm sure you're all familiar with. Jespersen's notion of structure becomes the I-language, the internal generative system: the finite system that determines an infinite array of hierarchically structured expressions that express thoughts, to the extent that they can be expressed linguistically, and that can be externalized in sensorimotor systems — although, as we know, that's not a necessary part of it.
We can call this the Basic Property of language. To address the Galilean challenge, there are several tasks to undertake. The main one, of course, is to try to determine the internal languages — the I-languages — of speakers of typologically varied languages: an enormous task. Then the question arises how a speaker selects a particular expression from the internal language; then how the expression selected is externalized, and, conversely, how the hearer internalizes the externalization. Those last two tasks concern input-output systems; we understand how to study them, and a lot has been learned about them over the years. The first — how the speaker selects a syntactic object out of the infinite array — is a total mystery; there's nothing to say about it. That's true of voluntary behavior generally. In fact, two of the leading specialists on the neuroscience of voluntary action, Emilio Bizzi and Robert Ajemian, wrote a state-of-the-art review article about a year ago in which they discussed what is known about voluntary motion — simple things, not language; simple things like raising your finger. What they said, putting it fancifully, is: we're beginning to learn about the puppet and the strings, but we can say nothing at all about the puppeteer. How you select what you're going to do remains the kind of question that we don't even know how to pose intelligently in the sciences at this point. Well, the I-language, following the tradition, is a property of the individual, and of the species — specific to humans.
The faculty of language is also an internal property, something that makes it possible for an I-language to be acquired, and it has to meet a pair of empirical conditions — two conditions that pull in opposite directions: the conditions of learnability and the conditions of evolvability. Whatever the faculty of language is, it must be rich enough that, possessing it, a child can acquire the I-language from the scattered and limited data available, attaining an internal system that has all these rich and complex consequences. So it has to be that rich. But it also has to be simple enough to have evolved. And now we can be a little more specific about that, because some of the conditions on the evolution of language are coming to light, and whatever evolved has to meet those empirical conditions. Well, those are the conditions
for a genuine explanation. If someone proposes a descriptive device that satisfies these conditions, then it can be the basis for an explanation addressing the Galilean challenge as it was formulated and developed in the tradition of rational and universal grammar. Genuine explanation is always at the level of UG, the theory of the faculty of language, and it has to offer some prospect of satisfying the conditions of learnability and evolvability. That's a fairly austere requirement — a very austere requirement — but it's the right one. Anything that falls short of it can still be very valuable: it may organize the problems in an interesting way, something to move on from; but it doesn't reach genuine explanation. I think we can now understand more clearly what an authentic, genuine explanation actually is, something that wasn't really possible at earlier stages of linguistic inquiry. Again: any device that's introduced to account for something, unless it can meet these joint, dual conditions, falls short of explanation. It can be very valuable, and there are many examples. Take one concrete example, to which we'll return later if there's time: an interesting paper by Bošković, whom everyone knows, on the coordinate structure constraint and the adjunct island constraint. What he points out is that each of these constraints poses many problems, many mysteries. His paper is an effort to reduce the mysteries by reducing both constraints to the same constraint, using the device of
neo-Davidsonian event semantics, which interprets adjunction as a kind of coordination. So you can reduce both problems to the same problem of coordination. You still have the mysteries, but now it's a simpler problem — one set of mysteries instead of two independent ones — and he tries to show that the problems do reduce in this way. Well, that's a step forward. It leaves the mysteries in a better position for productive inquiry, but it's not an explanation, and he's very clear about that. I think if you look over the field, practically everything achieved is a partial step forward in this sense. There are very few exceptions, just now beginning to come to light, that I think can count as genuine explanations. They're important in themselves, and they're also a kind of guide to how we should think about proceeding; and they also tell us something about how much might lie beyond the kinds of explanations that are now beginning to emerge — which is not so obvious. I'll talk about that. Actually, the very earliest work in generative grammar tried to meet even more austere conditions. It was heavily influenced by the work of people like Nelson Goodman and W. V. Quine, who were working on what they called constructive nominalism:
very austere — no sets, just mereological concepts of a very limited kind. That was too austere, at least for the time being, so it was abandoned — maybe someday it will be worth returning to — and attention turned to something else: the vast range of empirical data from languages of all kinds that began to pour in as soon as the first efforts were made to write actual generative grammars. It turned out that everything was puzzling and complex; nothing was understood; there were just massive puzzles. A big change from a few years earlier, during the period of structural linguistics, when it was basically assumed that everything was known, everything was solved: the methods of analysis could be formalized, and all that remained was to apply them to one language or another. That turned out to be radically false. Well, the first proposals, as everyone knows, were dual:
there were operations to deal with the problem of compositionality — phrase structure grammar — and totally different operations to deal with the phenomenon of dislocation, a ubiquitous phenomenon — transformational grammar. Both systems were much too complex to meet the long-term goal of genuine explanation; that was well understood. The assumption at the time, and it remained the assumption for a long time, up to today, was that the principles of compositionality are natural — you'd expect them, something like what you find in formal systems — but that dislocation is a strange property that languages have, a kind of imperfection that, for some reason, languages are stuck with; formal languages would never be designed with that property. And that's still a widely held view.
I think that's exactly the opposite of the truth; the opposite, it turns out, is true. More recent work suggests that dislocation is a kind of null hypothesis: it's what's expected on the simplest assumptions, and it's the most primitive of the operations. I'll come back to that, but let me quickly review the steps by which, I think, we slowly came to this conclusion. By the 1960s it was understood that phrase structure grammar is much too rich to be appropriate for describing languages. There is nothing in the theory of phrase structure grammar that prevents you from having a rule like, say, VP → N CP — a rule that makes no sense.
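To make the overgeneration point concrete, here is a minimal sketch in Python; the toy rules and function names are illustrative assumptions of mine, not anything from the lecture. Both grammars below are equally legitimate as far as the phrase structure formalism itself is concerned — only an added theory distinguishes them:

```python
# Illustrative sketch: nothing internal to the context-free formalism
# blocks a senseless rule. Both of these are well-formed toy grammars.

sensible = {"VP": [["V", "NP"]], "NP": [["N"]], "V": [["read"]], "N": [["books"]]}
senseless = {"VP": [["N", "CP"]], "CP": [["C"]], "N": [["books"]], "C": [["that"]]}

def generate(grammar, symbol):
    """Expand a symbol depth-first, always using the first rule listed."""
    if symbol not in grammar:
        return [symbol]                      # terminal word
    out = []
    for s in grammar[symbol][0]:
        out.extend(generate(grammar, s))
    return out

print(generate(sensible, "VP"))    # ['read', 'books']
print(generate(senseless, "VP"))   # ['books', 'that'] -- formally fine, linguistically crazy
```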
It was just assumed that you can't do that kind of thing, but the right theory has to rule it out as unformulable. That step was taken by the late 1960s, basically with X-bar theory. X-bar theory had interesting consequences that weren't fully appreciated at the time; they're obvious in retrospect. For one thing, notice that X-bar theory has no linear order. So Japanese and English, say, which are more or less mirror images, have essentially the same structures in X-bar theory terms; order is somewhere else. That was a step toward something that I think is much clearer now: that the surface order of expressions is not strictly part of language. It's something else.
I'll come back to that; but if you just look at X-bar theory, it's already a step in that direction. Another thing about X-bar theory is that it forces a theory of parameters. Japanese and English, say, do differ, and they're going to differ in some choice that is not determined by X-bar theory. So a speaker and hearer who are using a linear externalization system — they don't have to use one, but if they are — will have to make a choice about the order in which the internal system is externalized. So X-bar theory itself is a first step toward separating linear order and other surface organization from the core I-language.
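As a concrete illustration of that separation, here is a minimal sketch in Python. The tree, the parameter name, and the toy "English"/"Japanese" settings are my own illustrative assumptions, not the lecture's: one order-free hierarchical object comes out in mirror-image orders depending on a single externalization choice.

```python
# Illustrative sketch: one order-free structure, two externalizations.
# A node is either a word (string) or a pair (head, complement);
# the pair itself carries no intrinsic order.

def linearize(node, head_initial):
    """Externalize a hierarchical object as a word string."""
    if isinstance(node, str):                # lexical item
        return [node]
    head, comp = node
    h = linearize(head, head_initial)
    c = linearize(comp, head_initial)
    return h + c if head_initial else c + h

structure = ("T", ("read", "books"))         # roughly [T [V N]]

print(linearize(structure, head_initial=True))   # ['T', 'read', 'books']  (English-like)
print(linearize(structure, head_initial=False))  # ['books', 'read', 'T']  (Japanese-like mirror image)
```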
This bears on the Galilean challenge: the construction of the infinite set of linguistically articulated thoughts is one thing; externalization in some sensorimotor medium is something else. I think the picture is becoming clearer; we'll come back to it. Well, along with the clear progress of X-bar theory, there were also very serious problems that weren't recognized at the time. The main problem is that it excludes the possibility of exocentric constructions: everything has to be endocentric in X-bar theory, and that's simply false. There are exocentric constructions everywhere — simple things like subject-predicate, and indeed every case of dislocation, without exception. All of these are exocentric: if you label the result VP, that's just a stipulation; you could just as well call it NP. And this runs through the entire descriptive apparatus. So there was a serious problem, not really recognized until a couple of years ago. My feeling is that it's by now pretty much overcome by labeling theory, which tells you, in a principled way, in terms of minimal search — a simple computational principle — when internal Merge, movement, can take place, when it must take place, and when it need not take place. There are lots of interesting results and lots of interesting problems about this, very intriguing material, most of which I assume you're familiar with. Moving on to the 1990s: it seemed to some of us that we had learned enough that it might be possible, for the first time, to confront the problem of genuine explanation. That's what's called the minimalist program. Pursuing that program, if you want genuine explanation, you want to begin with computational operations that meet the conditions of learnability and evolvability. Well, the easiest way to meet the learnability condition is for learnability to be zero: it's simply innate; nothing to say.
The easiest way to meet the evolvability condition would be to find a computational operation that had to evolve — there was no way for it not to have evolved, given that language exists. Well, if we look at those two conditions, they're satisfied by the most elementary computational operation, what's been called Merge in recent years — which, incidentally, has many problems that I'll come back to — basically just the operation of forming a binary set. It has to be there, because the Basic Property holds. And that means that at least the simplest operation must exist — maybe more complex ones too, but at least the simplest one. So we know it has to exist; it had to evolve, so it meets the evolvability condition. That leaves the question of how it happened, and what the neurological implementation is; but whatever the answers to those questions, this is an operation that had to evolve, and having evolved, it's innate — so it meets the learnability condition. So if you can reduce something to that, you have a genuine explanation. That's as far as it goes: if you can't go that far, it's a description, not a genuine explanation. Again, this is a pretty austere requirement, but I think it's the one we all have in mind when we think about the goals of our efforts, digging into language. I won't give the details, because I think you're familiar with them. But the simplest computational operation, then, is Merge — binary set formation — meeting the no-tampering condition: least possible computation doesn't modify the elements and doesn't impose any more structure. There are interesting things to say about this; I'll return to them. There's a fair amount of current literature that tries to show that this operation can be built up in steps. That's incoherent: you can't have partial binary set formation; it can't be reached in steps. Either you have it or you don't — it doesn't get any simpler. Again, there's a lot of literature about this, but it's beside the point. There's actually an interesting recent paper by Riny Huybregts which goes through some of the recent proposals and shows why they can't work.
They can't make sense, if you think it through. The simplest case of Merge will have at least — and, we'd like to show, at most — two cases. One of them is external Merge, when you take two separate things and form the set of them; the other is internal Merge, when you take one thing and something inside it and form the set of those. Those are the two simplest possibilities. Notice that there's only one operation here, not two: one operation with two cases. There's a lot of confusion about this in the literature, but it should be obvious if you think about it. And notice that this whole program is a program, not a theory. The program is to see how far we can go if we take the simplest possible operation and try to give genuine explanations in terms of it. Maybe that's impossible; maybe we'll have to find more complex operations. But in that case it will be necessary to show how they could have been acquired, how they could be learned, and how they could have evolved.
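A minimal sketch of the point, in Python, with set formation standing in for Merge; the encoding and names are illustrative assumptions of mine. The same single operation covers both cases — what differs is only where the second argument comes from:

```python
# Illustrative sketch: Merge as bare binary set formation (frozensets,
# so the objects are genuinely unordered). One operation, two cases.

def merge(x, y):
    """Form the two-membered set {x, y} -- no order, no extra structure."""
    return frozenset({x, y})

def terms(obj):
    """Everything contained in a syntactic object, including itself."""
    yield obj
    if isinstance(obj, frozenset):
        for part in obj:
            yield from terms(part)

# External Merge: the two arguments are separate objects.
vp = merge("read", "books")                        # {read, books}

# Internal Merge: the second argument is already a term of the first.
# Same operation; the "moved" item now has two occurrences (copies).
q = merge(vp, "books")
print(q)                                           # {'books', {'read', 'books'}}
print(sum(1 for t in terms(q) if t == "books"))    # 2 -- the two copies
```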
And that's not so trivial. You can't just say: well, natural selection does whatever I'd like. That's not an explanation. You have to give a real explanation, and that's very difficult in biology. It's pointed out in the biological literature that it is — the standard phrase — fiendishly difficult to account for the evolution of almost any trait, even the simplest ones, like having blue eyes, for example. It's not the kind of thing you can wave your hands at. So you can either try to meet that condition, or concede that you don't have an explanation. Well, I think there have been substantial achievements in recent years in trying to reach genuine explanations. They have problems.
I want to come back to the problems later, but I'll leave them on the shelf for a moment. One achievement, which is not trivial, is the unification of the two traditional kinds of operations: compositionality and dislocation. They're unified once you keep to the simplest computational operation. Far from being an imperfection, as was always assumed — by me in particular — it would take a stipulation to bar dislocation: if you have no stipulations at all, you get dislocation. As I mentioned before, dislocation is arguably the simplest case of Merge. You can't really have one without the other, because once you have Merge you have both; but if you're asking which is more primitive, it's probably internal Merge. The reasons are quite straightforward. External Merge requires an enormous search: to put together two things that are separate, you first have to search the entire lexicon, and then you have to search everything that's already been constructed, which may be sitting there somewhere waiting to be merged. With internal Merge there's almost no search at all. So one reason for thinking dislocation is more primitive is that it requires only a tiny fraction of the search. But there's a lot more to it than that.
There are some interesting suggestions in the literature — not definitive, but suggestive. One of them is work done by Marvin Minsky a couple of decades ago. He and one of his students simply explored what would happen if you took the simplest Turing machines — the smallest number of states, the smallest number of symbols — and just let them run free, to see what happens. What turned out was kind of interesting. Most of them crashed: they either got into infinite loops or just halted. But the ones that didn't crash all gave the successor function. Now, what is the successor function? Well, one thing the successor function is, is internal Merge. If you take Merge with a one-membered lexicon and just let it run free, you get the successor function. Minsky's argument at the time was that in the course of evolution, nature probably found the simplest thing — that's what you'd expect — so it found the successor function; and the successor function amounts to internal Merge, not external Merge. If you look elsewhere, down to the insect level, organisms have counters: ants can count the number of steps they've taken; they have a counter, or maybe a set of counters, inside them. And if you look at the mathematics of counters, they extend naturally to the successor function — it doesn't take a big step to get from counters to the successor function. So from several points of view it seems plausible to think that the most primitive of the operations is actually dislocation, contrary to what was always thought; and as constructions became richer, external Merge came in, giving richer kinds of languages. Clearly natural language has external Merge, not just internal Merge. There are interesting questions about why — probably having to do with argument structure, which is tied specifically to external Merge. We'll come back to that.
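Here is one way to cash out the successor-function remark as a sketch, assuming — purely for illustration — a set-theoretic encoding where repeated Merge over a one-membered lexicon produces ever deeper nesting, with nesting depth playing the role of the number. The encoding choices are mine, not the lecture's:

```python
# Illustrative sketch: Merge run free over a one-membered lexicon {x}
# yields a successor-like sequence: x, {x}, {x, {x}}, {x, {x, {x}}}, ...

def merge(x, y):
    return frozenset({x, y})        # {x, x} collapses to {x} automatically

def numeral(n, x="x"):
    """Apply Merge to the single lexical item n times."""
    result = x
    for _ in range(n):
        result = merge(x, result)   # each step re-merges x with the result
    return result

def depth(obj):
    """Nesting depth: the stand-in for the number encoded."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(e) for e in obj)
    return 0

print([depth(numeral(n)) for n in range(5)])   # [0, 1, 2, 3, 4]
```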
Well, what about the unification of internal and external Merge, compositionality and dislocation? What X-bar theory suggested, as I mentioned before, becomes much clearer and more explicit. It seems that generation at the CI interface — sometimes called LF — which is what gets interpreted, the linguistically articulated thoughts, the core I-language, keeps simply to structure, with no linear order or other kinds of arrangement. So why is there linear order in spoken language? Incidentally, it's not strictly there in sign language. Sign language, which we know is essentially equivalent to spoken language, has a different dimensionality: visual space can be used; you can use simultaneous operations, gestures, facial movements. It's not strictly linear; it makes use of the contingencies allowed by its space of externalization. But speech happens to be linear — you have to string words out one after another — so if you pick that particular modality of externalization, yes, you're going to have linear order. But does linear order have anything to do with language?
Well, that depends on what you want to call language. What it really has to do with is an amalgam of two totally independent systems: one of them the internal language; the other a particular sensorimotor system, which has absolutely nothing to do with language. Sensorimotor systems existed for hundreds of thousands, maybe millions, of years before language appeared, and they don't seem to have been affected by language. At most there are minor suggestions of slight adaptations that might have taken place — changes, say, in the alveolar ridge in the click languages — some very small things. But basically the sensorimotor systems seem independent of language. If you externalize the internal system through this filter, you're going to get linear order. But strictly speaking, that's a property of the amalgam of two independent systems — and in fact that's true of externalization altogether. Notice that externalization faces a hard problem: you have two completely independent systems that have nothing to do with each other, and they have to be related somehow. You'd expect the process to be quite complex — it can be done in many different ways — and also variable, easily mutable, able to change from one generation to the next under slight effects. Putting all these expectations together, it's becoming more and more plausible to imagine
that the variety, complexity, and mutability of language may basically be properties of externalization, not of language itself. And it may turn out that the internal core — what's really unique — doesn't vary from language to language at all. In fact, that's more or less tacitly assumed in essentially all the work on formal semantics and pragmatics: none of it is assumed to be parameterized from one language to another, or learned in any way. It's just there. Which means that if we ever understand it properly, it should be reducible to elementary computations — the kind that simply don't vary.
That's how the internal system works, and that should be the goal of research in those directions. I should say, just as a terminological point, that what's called formal semantics is actually a form of syntax: it's symbolic manipulation. Technically, something becomes semantics when you relate it to the external world, and that's a complicated business. Even things like events, say — if you think about it, events are really mental constructs; you can't find them in the outside world. And the task of relating what's internal to the external world, dealing with questions of reference, is not a trivial matter.
A goal for all this work, then, is to try to reduce it to computational operations that meet the conditions of genuine explanation. That's a very austere criterion, but I think it's worth keeping in mind. Well, these are all possibilities that I think are becoming more and more plausible — the field could move in that direction, and the discovery would be very striking if it really works out. Let's go on with genuine explanations. One of them is dislocation, unified with compositionality. And notice that this automatically includes the basis for what's called reconstruction: if you keep to the no-tampering condition, you automatically get what's called the copy theory of movement, which is the basis for the complex properties of reconstruction.
There's a lot to dig into there, but that's essentially the basis: you don't need rules of reconstruction; it's just automatic. Well, of the genuine explanations, I think the most interesting case is the old principle of structure dependence, which was discovered back in the 1950s. This is a really strange property of language that had never been noticed: the rules and operations of language — the ones that yield the interpretation of sentences — pay no attention to linear order. They keep entirely to structure. That's extremely puzzling when you think about it, because linear order is what you hear. It's one hundred percent of what you hear; you never hear structure. Moreover, at least superficially, computations on linear order look simpler than computations on structure. From another point of view that turns out to be false, but superficially it looks right.
So what has always seemed extremely puzzling is that the syntactic rules and the rules yielding semantic interpretation pay no attention either to one hundred percent of what you hear, or to what look like the simplest operations. A pretty puzzling fact — and now we have a simple explanation for it. It follows from the simplest computational operation: if the whole of I-language is based on computation with the simplest Merge operation in its simplest form, you get structure dependence automatically — for operations of movement, of construal, of interpretation, of everything else. I won't go through examples.
I assume you're familiar with them; but this appears to be a fact about all constructions in all languages, and if it's correct, it's a genuine explanation of a fundamental property of language — perhaps the deepest property of language: that the core language simply doesn't care about order. It cares only about structure, and a child learning language simply ignores, in this respect, everything it hears. There's interesting independent evidence supporting this conclusion. Studies of language acquisition, done in very sophisticated ways, have by now reached the point of showing that 30-month-old infants already observe the principle of structure dependence — with almost no relevant data, remember, and it's a very abstract principle.
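For concreteness, here is the familiar textbook example — yes/no question formation — in toy form; the bracket notation and function names are my own illustrative assumptions. A linear rule ("front the first auxiliary") and a structural rule ("front the auxiliary of the main clause") agree on simple sentences but come apart on "the man who is tall is happy," and children never produce the linear rule's output:

```python
# Illustrative sketch of the familiar yes/no-question example.
# Brackets mark an embedded phrase in this toy representation.

sentence = ["the", "man", "[", "who", "is", "tall", "]", "is", "happy"]
AUX = {"is"}

def front_linear(words):
    """Front the linearly first auxiliary -- what children never do."""
    i = next(i for i, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def front_structural(words):
    """Front the first auxiliary outside any embedded (bracketed) phrase."""
    level = 0
    for i, w in enumerate(words):
        if w == "[":
            level += 1
        elif w == "]":
            level -= 1
        elif w in AUX and level == 0:
            return [words[i]] + words[:i] + words[i + 1:]

print(" ".join(front_linear(sentence)))
# is the man [ who tall ] is happy   <- never produced
print(" ".join(front_structural(sentence)))
# is the man [ who is tall ] happy   <- what everyone says
```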
There's earlier work — work by Stephen Crain and Nakayama, with a lot of evidence that three-year-olds have mastered it. The recent studies have brought that down to 30 months; with better studies, which keep improving, it'll probably come down further. What that means is that you're essentially born with it. So it meets the learnability condition — zero — and it has to meet the evolvability condition: you have to have this particular operation at least, maybe more, but at least this one, because you have the basic principle. Well, as many of you know, there's also neurolinguistic evidence. The studies initiated by Andrea Moro with a group in Milan — Musso and others — have shown (many of you know this) that if you present subjects with invented systems of two types — one that conforms to the rules of an actual language the subjects don't know, the other using things like linear order — you get different kinds of brain activity. In the case of, say, "negation is the third word in the sentence" — a very trivial operation — you get diffuse brain activity; if subjects follow what look like more complex rules of actual languages, you get activity in the expected language-specific areas, Broca's area and so on. That's been replicated many times now; it's a pretty solid result. There's also psycholinguistic evidence of other kinds. The Moro-Musso experiments were actually suggested by the work of Neil Smith and Ianthi Tsimpli on a subject they had been working with for many years: a young man they call Chris, extremely limited cognitively, with almost no capacities, but with tremendous linguistic capacities. He picks up languages like breathing, like a sponge — words just come immediately. Neil Smith tried these same experiments before the neurolinguistic work existed. He tried them with Chris, and it turned out that when Chris was given a nonsense language modeled on an actual language, he learned it easily, like any other language; when he was given the very simple system — things like "negation is the third word" — he couldn't handle it at all. It was just a puzzle to him, and he can't deal with puzzles. That's what inspired the neurolinguistic studies.
I think this is the most interesting result so far in the brain sciences related to language, and it points to a direction in which further experimental work could go. Looking back, this seems to be one of those very rare cases where there is converging evidence from every direction leading to the same conclusion: that the core I-language is simply independent of linear order and other arrangements. You have linguistic evidence, psycholinguistic evidence, neurolinguistic evidence, evolutionary considerations — anything you can think of. Now, there's a very curious fact: there's a huge literature in computational cognitive science that tries to show that somehow this principle can be learned. That's a very strange enterprise if you look at it — it's like trying to find a complicated way to disprove the null hypothesis. Things like that just don't happen in the sciences.
I mean, here you have an optimal explanation, and a huge literature trying to show that there might be some very complicated way of reaching the same conclusion. It's an enterprise that's kind of senseless at its base. Of course, when you look at the actual cases it never works — and it wouldn't matter if it did, because it's asking the wrong question. Suppose that with detailed statistical analysis — recurrent neural networks with many layers, trained on, say, the Wall Street Journal — you could find evidence that a thirty-month-old child could have used to discover structure dependence. You won't find it, of course, though there's literature claiming to; but even if you did find it, it wouldn't mean anything. The only question is: why is this the way it works?
Why, in every language and every construction, does it work this way? If you could find a way to show, well, here's how it might work in this language, that tells you nothing — it's answering the wrong question. Moreover, as I say, it's looking for a complicated way to disprove the null hypothesis. The whole enterprise is kind of absurd. It's probably the major effort in computational cognitive science, trying to find a basis for some linguistic principle, and a large literature of new papers keeps appearing. There's something very strange about it: papers that try to show that, as they often put it, you can get structure dependence without what's sometimes called an inductive bias toward structure dependence. But there is no inductive bias — it's just the null hypothesis. You make no assumptions; this is what you get. There's no bias; it's just given. So I think there's an interesting question — among the many interesting questions about how linguistics is done — namely, why do things like this
go on? It's worth thinking about. Well, there are other successes, but what I'd like to do now is turn to the problems. There are a lot of problems with Merge, and some of them have solutions. One problem is the one I mentioned about exocentric constructions: you need subject-predicate structures, {NP, VP} — the predicate-internal subject, say — where you put together a subject, an NP, and a VP. NPs are these days often called DPs; I'll come back to that — I think it's probably a mistake; let's just call them noun phrases. When you have a noun phrase and a verb phrase and you put them together, that gives you the basic theta structure.
Well, the noun phrase and the verb phrase have to be constructed independently, which means you need some kind of workspace — something Jonathan pointed out years ago — a workspace in which you're constructing these separate things. And if you think it through, the workspace can proliferate — not indefinitely, but it can get big — as you construct things in parallel and then put them together. What that means is that the Merge operation really ought to be revised, to become an operation on workspaces, not on two elements X and Y: an operation that changes one workspace into another. And then the question arises how it does that.
I have to say, I'm very glad to be back at a nice low-tech institution like this one, with blackboards and no PowerPoint, no projections — the kind of thing they have in Arizona, where I am now. So what we want is some operation — call it capital MERGE; we'll look at its properties — that takes two things, call them P and Q, the things to be merged, plus a workspace, and turns it into another workspace. So what's the other workspace? Well, it will include the set {P, Q}, the new object. Actually, let me use a different notation for workspaces, for a reason worth flagging: a workspace is a set, but we want to distinguish it from the syntactic objects, which are sets — a workspace doesn't merge with anything. So, just for convenience, use a different notation. The new workspace, then, will include the set {P, Q} — and a lot of other junk. The next question is what the other junk in the workspace is, and that turns out to be a non-trivial question. Take the simplest case: the whole workspace consists of two elements, just A and B. That's the workspace, and suppose we decide to merge them. We get the new workspace, which includes the set {A, B}. Does it include anything else — for example, does it include A and B themselves? Well, if you think about the way recursion generally works, it should. If you're doing proof theory, say, and generating a proof, you construct a line from axioms and earlier lines, and you can always go
back to any earlier line if you want; whatever you've produced remains available for the next step. But there's good reason to believe that organisms, and humans in particular, don't work that way, and you can see why if you think about what would happen if they did. Suppose you allow A and B to remain accessible. Then over here you could go on to build some much bigger object containing {A, B} as a term — it could be of arbitrary complexity, any kind of complexity you like — and then you could take A and merge it with that, and the constraints on dislocation would be violated, no matter how radical the violation. Well, that tells us something — something surprising, I think, and significant: the kind of recursion that takes place in human language, and probably in organic systems generally, restricts the set of elements accessible to computation as tightly as possible.
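A minimal sketch of capital MERGE as a map between workspaces, under the restriction just described; the tuple/frozenset encoding and the removal policy shown are my own illustrative assumptions, not the lecture's formulation:

```python
# Illustrative sketch: MERGE maps one workspace to another. Workspaces
# are tuples (to distinguish them from syntactic objects, which are
# frozensets). Merged elements are removed from the top level, so they
# are no longer independently accessible -- resource restriction.

def contains(obj, target):
    """Is target a term of obj (or obj itself)?"""
    if obj == target:
        return True
    return isinstance(obj, frozenset) and any(contains(e, target) for e in obj)

def MERGE(P, Q, WS):
    """Form {P, Q}; drop P and Q themselves from the workspace."""
    assert P in WS and (Q in WS or contains(P, Q))
    new = frozenset({P, Q})
    return tuple(x for x in WS if x not in (P, Q)) + (new,)

WS = ("a", "b")              # workspace with two elements
WS1 = MERGE("a", "b", WS)
print(WS1)                   # (frozenset({'a', 'b'}),) -- only {a, b} remains
# Since 'a' is no longer at the top level, it can't be re-merged with
# some arbitrarily complex later object, which is exactly the point.
```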
Well, let's give it a name: call it resource restriction. It seems to be very general — this is just the first example. If you think it through, the same pattern works for lots of things: the same style of refutation eliminates the whole set of extensions of Merge that have been proposed over the years. I'll come back to examples, but you can see what the problem is. If you allow the normal kind of recursion, with no constraints or limits, then you'll discover that by legitimate means you can derive illegitimate objects. That has to be barred. Now, you can generate all kinds of deviant expressions —
that's not a problem. But you don't want to have legitimate means for generating things that violate every possible condition, every descriptive condition; anything like that is wrong. And in this case, it turns out, the result can in many cases be barred simply by limiting the resources that are available. Now, what are resources? Resources are the elements that are accessible to the operations. So the actual condition says: limit accessibility — keep accessibility as small as possible. We already have familiar examples. One of them is the phase impenetrability condition. If you think about what that condition says, it's basically this: in the course of a generation, when you reach a certain unit — a phase; we can argue about what they are — anything inside the phase is no longer accessible to operations. That reduces the amount of computational search required; it's a way of limiting accessibility: it says the things down there aren't accessible anymore. Another example — and this may be the only other example — is minimal search. This is what's often called a third-factor property. "Third factor," for those of you who aren't familiar with the term, comes from the simple decomposition of the elements that enter into growth and development — into acquiring any system. Three factors enter: external data, internal structure, and, basically, laws of nature that are independent of the system in question. If you're studying the growth of arms, say — humans grow arms, not wings — that's partly because of nutrition to the embryo, partly, in fact largely, because of internal structure, genetic determination, and in large measure simply because of the way physical law operates. There are only certain ways organisms can develop; other ways are simply not possible.
Put those together and you explain any kind of growth and development. The same goes for language. There's the external data, whatever it is, that determines whether you end up with, say, Tagalog or English; there's the internal structure, which at the very least includes Merge — and anything that can be explained in terms of that does yield a genuine explanation; and then there are the laws of nature. What are the relevant laws of nature? Language is a computational system, which is an unusual thing — rare in the organic world, maybe unique, apart from those counters — but that's what language is. So among the relevant laws of nature you'd expect principles of computation: minimizing computational complexity, making things as simple as possible.
There are several reasons to expect that. One of them actually goes back to Galileo again. One of Galileo's precepts was that nature is simple, and it's the task of the scientist to prove it — whether it's falling objects, or the flight of birds, or the growth of flowers, or whatever it may be. It's a kind of prescriptive hypothesis — you can't prove it — but it has been extraordinarily successful. In fact, the success of the sciences over the last 500 years rests on it, and that's reason enough to assume that it works for us too. So it's reasonable to accept it. There's also a general point that simply has to do with the nature of explanation:
it's just a fact about explanation that the simpler the assumptions, the deeper the explanation — that's close to logic. So there are converging reasons. And in the case of language there's a further reason to believe it, one that's special to language and has to do with the conditions on the evolution of language. We know very little about that. As I said, trying to seriously account for the evolution of any particular trait is very hard, even in simple cases. In the evolutionary psychology literature everything looks easy — it happened by natural selection, why not — but when you actually try to explain something, it's difficult. In the case of cognitive development it's uniquely difficult, because you have no fossil record — no recordings of what people were doing 100,000 years ago.
Furthermore, when it comes to language in particular, it's uniquely hard. With other organic systems — say vision — you have comparative evidence: you can study cats and monkeys, which have essentially the same visual system, and with them, rightly or wrongly, we allow ourselves invasive experiments, so you can put a probe into a cell in the striate cortex and see what's happening. You learn a lot from that; that's how we know about human vision. With language you can't do that, because there's no analogous system. It's a unique system — nothing analogous in the organic world — so there's nothing comparative to test. It's exceptionally difficult. Nevertheless, there is some evidence. Bob Berwick and I have a book that reviews it, and there's better evidence now than what we had for the book. There's genomic evidence that Homo sapiens began to separate roughly two hundred thousand years ago; that's when the San people in Africa separated from the rest.
Interestingly, they have unique forms of externalization: theirs turn out to be essentially all and only the languages with complex click systems. There are what appear to be a few suggested exceptions, but they seem to be borrowings or otherwise accidental. There's a very interesting paper about this published recently. So one thing we know fairly convincingly is that roughly 200,000 years ago, when humans began to separate, they already shared the faculty of language. There's no known difference between the language faculty of the San people and everyone else's — nobody knows of any
group differences in language capacity. But there is a different form of externalization, which suggests — and Riny Huybregts goes into this in detail in his paper — that these particular forms of externalization developed later. As a matter of logic, the internal system had to be there before it could be externalized; that's not debatable. But it suggests a gap: the system was in place roughly 200,000 years ago, and it came to be externalized, in different or somewhat different ways, later. When did Homo sapiens appear? Well, here we have a reasonably good fossil record, and it shows that anatomically modern humans appear around that time, maybe 250,000 years ago — which is essentially nothing in evolutionary time. So it looks as though language emerged pretty much along with Homo sapiens, with the faculty of language intact.
Another kind of evidence comes from the archaeological record, which provides a lot of information about rich symbolic activity. It turns out that almost all of the rich symbolic activity anyone has unearthed so far postdates the appearance of Homo sapiens. Rich symbolic activity has naturally been taken as an indication of the existence of language, and also of more complex social structures — burial practices, for example. Putting it all together, it seems plausible that language emerged suddenly, in evolutionary time, along with Homo sapiens: whatever change gave rise to Homo sapiens seems to have brought language along with it, and language apparently hasn't changed since. So there are independent reasons to believe that whatever emerged is probably very simple — which fits with the Galilean precept and the general principle that if you want explanation, you want simplicity. So it makes sense, from many points of view, to assume that the relevant laws of nature here are principles of computational efficiency, minimizing computational complexity. That's what it means to call something a third-factor property: a particular case of computational simplicity. [In response to a question:] It's a common belief that neural networks are what's doing the computation, but there's pretty good evidence that that's not true.
Real neurons — I'm talking about actual neurons — may not even be the elements that enter into the computation. There's reasonably strong evidence against that now. Randy Gallistel's book with Adam King makes a very strong case — and Gallistel has strong evidence — that if you look at neural nets, you just can't find in them the building blocks of, essentially, Turing machines: the core kind of computational element that yields computational activity isn't present in neural nets. What he argues is that people who have been looking for accounts of computation in neural nets are like the proverbial figure searching under the streetlight for the keys he lost across the street, because that's where the light is. We happen to know something about neural nets, so we look there; but what we're looking for may well be somewhere else. There's also a lot of evidence that the speed and scale of the computation are far beyond what neural
nets are capable of doing. One family of proposals — Gallistel's in particular — is that the computation is actually being done at the molecular level, having to do with RNA and so on. There are other proposals, from not inconsiderable people — Roger Penrose, for example — that the computation is being carried out by structures internal to neurons that would have far greater computational capacity. There are chemical processes going on in the brain that aren't captured by neural nets, and it's been known since Helmholtz that the transmission speed of neurons is just too slow to be doing very much. So it looks as if we're going to have to look elsewhere to find the implementation of the computational systems. But something is there, and it will be a third-factor property — something about the brain.
We talk about this in our book, in fact. So yes, certainly we'll want to relate whatever is going on, ultimately, down to the cellular level — that's science: try to reduce everything. Okay, so that's the third factor, if you like; but talking about neural nets is like talking about natural selection. [In response to a comment that something in the brain must be responsible:] Yes, yes — surely something in the brain is responsible for this. Not the foot, say: you can amputate a leg and you still have language; amputate your head and you don't. So we agree that something is going on in there. But it's a complicated question — a very hard question even for simple traits, not just language. For very simple traits, as I said, if you look at technical studies of evolution, the phrase that keeps being used is "fiendishly difficult": it's fiendishly difficult to find the evolutionary basis for even the simplest traits. To think we're suddenly going to find them for language is pretty misleading. There are some interesting ideas. Angela Friederici's recent book — a state-of-the-art treatment of the neurolinguistics of language — has interesting suggestions about what might be involved: probably a small change in brain wiring that led to the emergence of Merge or something like it, the closing of a certain circuit between the dorsal and ventral connections. It's an interesting proposal, but no doubt it's not going to be trivial; it's a hard problem. Well, maybe I'll end here and continue next time — but one principle that's expected, for many reasons, is computational efficiency, and minimal search is the strongest form of computational efficiency: search as little as possible. And there's a case of restricting accessibility that we're familiar with, beyond the one I introduced: minimal search is the success story of successive-cyclic movement. Say you've formed a WH phrase and you're moving it up here, and say the positions down here are blocked by the phase impenetrability condition, which just locks things down there. Well, the next question is: why does it move to this position rather than that one?
We take that for granted; nobody talks about it. But if you ask why, it's again a minimal search question: whatever the selecting operation is — we can argue about what it is; that goes back to the mystery I mentioned — it will take this one, because this is the one found by minimal search. So we already have at least two familiar cases of limiting accessibility: one, the PIC, which is quite broad; the other, minimal search, which we've just looked at. Maybe that exhausts it. But I think there's now a broader general principle, resource restriction, which just says: restrict resources. And that will have a lot of effects.
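A minimal sketch of minimal search, purely illustrative: breadth-first search over a set-based syntactic object finds the structurally closest candidate first, so the higher of two WH phrases is the one selected. The toy WH labels and the encoding are my assumptions:

```python
# Illustrative sketch: minimal search as breadth-first search over a
# set-based syntactic object. The shallowest matching term wins.

from collections import deque

def minimal_search(obj, match):
    """Return the structurally closest subterm satisfying `match`."""
    queue = deque([obj])
    while queue:
        node = queue.popleft()
        if match(node):
            return node
        if isinstance(node, frozenset):
            queue.extend(node)       # deeper terms are examined later
    return None

# Two candidate WH phrases at different depths:
inner = frozenset({"what2", "bought"})
tree = frozenset({"what1", frozenset({"C", inner})})

is_wh = lambda n: isinstance(n, str) and n.startswith("what")
print(minimal_search(tree, is_wh))   # 'what1' -- the higher copy is found first
```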
I'll come back to examples of that next time. At this point there's a temptation to relate resource restriction to something Ray and I were talking about: the fact that the brain is just slow — it works quite slowly — and there are many domains where that shows up. In many ways the most striking is vision. Look at the sensorimotor systems — the visual system: the cells of the retina actually respond to individual photons of light. They're at the maximum; they deliver the maximal amount of information. The brain doesn't want all that information; it would simply be overwhelmed if it took it in. So whatever the visual system is doing, the first step is to throw out almost all the information coming from the retina. And apparently every sensory system is like that: the first thing it does is throw away most of what comes in, and try to arrive at something limited enough that this slow brain up here can deal with it somehow. That looks very much like a general property, of which resource restriction — the principle that says don't do ordinary recursion, restrict the resources — is a special case. Everything seems to converge plausibly. We're very familiar with this in the study of language acquisition: as everyone knows, when an infant acquires a phonetic system, it's basically throwing away information — tons of information in the first months of life — so that by about nine months or a year it keeps only what it needs. And the same thing happens through the rest of language acquisition.
If you look at something like Charles Yang's general approach to language acquisition, the child starts with all possible grammars — all I-languages — and then, as data come in, changes the probability distribution over them, reducing the probability of anything for which there's no evidence, so that it effectively becomes invisible. That too is a matter of throwing away a great deal of information and converging on just a little. The brain is also constantly pruning neurons: you don't want all that junk around; you want just what you need. Resource restriction fits quite naturally into that picture.
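A minimal sketch of a Yang-style variational learner, under toy assumptions of my own — two competing "grammars," toy two-word sentences, and an arbitrary learning rate. Grammars that fail on incoming data steadily lose probability mass and become effectively invisible:

```python
# Illustrative sketch of a variational learner: keep a probability
# distribution over competing grammars; reward the one chosen when it
# parses the input, punish it when it fails (linear reward-penalty).

import random

random.seed(0)

def head_initial(s): return s in {"read books", "is happy"}
def head_final(s):   return s in {"books read", "happy is"}

grammars = {"head-initial": head_initial, "head-final": head_final}
p = {"head-initial": 0.5, "head-final": 0.5}     # initial distribution
rate = 0.1                                       # arbitrary learning rate

def update(sentence):
    g = random.choices(list(p), weights=list(p.values()))[0]
    others = [h for h in p if h != g]
    if grammars[g](sentence):                    # success: shift mass toward g
        p[g] += rate * (1 - p[g])
        for h in others:
            p[h] *= (1 - rate)
    else:                                        # failure: shift mass away
        p[g] *= (1 - rate)
        for h in others:
            p[h] += rate * (1 - p[h])

# The child hears only head-initial (English-like) data:
for _ in range(200):
    update(random.choice(["read books", "is happy"]))

print({g: round(q, 3) for g, q in p.items()})
# head-initial ends up near 1; head-final is effectively invisible
```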
I think I'll stop here and try to get back to more detailed examples next time — unless someone wants to raise something now.
