
Language and the Mind Revisited - The Biolinguistic Turn with Noam Chomsky

May 31, 2021
(soft music) - Almost exactly 35 years ago, I had the opportunity to give several lectures here, to the same audience I believe, on the topic of language and mind. And in the intervening years a lot has been learned about language and the brain, hence the mind in the sense in which I used the term then — the term mind, and mental, and similar terms — in those lectures. (audience member speaking without microphone) Sorry. Oh, thanks. (everyone laughs) And we will continue to use them now; it's always good to have a friend in the audience. (audience laughs) I am using these terms simply as descriptive terms for certain aspects of the world, more or less on a par with descriptive terms like chemical, optical, electrical, and so on.
These are terms used to focus attention on particular aspects of the world that appear to have a fairly integrated character and are worth considering for special investigation, but without any illusion that they cut nature at its joints. In those previous lectures, I assumed that human language can reasonably be studied as part of the world — specifically, as a property of the human organism, primarily the brain — and for convenience, I will stick to that. Both then and now, I am adopting what Lyle Jenkins in a recent book calls the biolinguistic perspective, which is the framework within which the approach to language I am considering was developed some 50 years ago.
Also for convenience, I will use the term language to refer to human language, which is a specific biological system. There is no meaningful question about whether the communication system of bees, or what apes can be taught, or mathematics, or music are languages — any more than there is a question about whether airplanes really fly or submarines really swim, or whether computers think or translate languages, or other comparably meaningless questions, many of them based on a misinterpretation of an important article by Alan Turing just over 50 years ago, in 1950. That article has generated a large literature — in my opinion mostly mistaken — despite Turing's very explicit warning not to go in that direction, a warning that has apparently been overlooked.
From a biolinguistic perspective, language is a component of human biology, more or less on a par with mammalian vision or insect navigation and other systems for which the best theories ever devised attribute computational capabilities of some kind — what in informal usage is sometimes called rule following. Thus, for example, contemporary texts on vision describe the so-called rigidity principle, formulated about 50 years ago roughly as follows: if possible, and other rules permit, interpret image motions as projections of rigid motions in three dimensions. In this case, later work provided substantial insight into the mental computations that appear to be involved when the visual system follows these rules, in informal terminology.
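The rigidity principle mentioned above can be illustrated with a small sketch — purely illustrative, with invented function names, and not the vision literature's actual algorithms. The constraint it relies on is that a rigid 3-D motion preserves all pairwise distances among points, even though the 2-D image projections of those points change considerably; the principle says the visual system prefers 3-D interpretations with that property.

```python
# Illustrative sketch (hypothetical helper names) of the constraint behind
# the rigidity principle: rigid 3-D motion preserves pairwise distances,
# while the 2-D image of the moving points does not.
import math

def rotate_y(p, theta):
    """Rigidly rotate a 3-D point about the y-axis by angle theta."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def dist(p, q):
    """Euclidean distance between two points of any dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def project(p):
    """Orthographic projection onto the image plane (drop depth)."""
    return (p[0], p[1])

points = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 3.0)]
moved = [rotate_y(p, 0.5) for p in points]

# 3-D pairwise distances are unchanged: the signature of rigid motion.
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        assert abs(dist(points[i], points[j]) - dist(moved[i], moved[j])) < 1e-9

# The 2-D image distances DO change; the rigidity principle says the
# visual system nonetheless prefers the rigid 3-D interpretation.
image_before = [project(p) for p in points]
image_after = [project(p) for p in moved]
print(dist(image_before[0], image_before[1]) != dist(image_after[0], image_after[1]))  # prints True
```

The point of the sketch is only that "rule following" here means computing over constraints of this kind, not that the brain literally runs this code.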
But even for very simple organisms this is not an easy task; in these areas, which remain quite obscure even for insects, many problems are unsolved. The decision to study language as part of the world in this sense should, in my opinion, not be controversial, but it has not been uncontroversial; on the contrary, the assumption that this is a legitimate enterprise was rejected quite strongly 50 years ago and continues to be rejected. Virtually all contemporary philosophy of language and mind is based on the rejection of this assumption. The same applies to what is called the computational model of the mind that underlies much of theoretical cognitive science — denied in this case not only for language but for mental faculties generally.
It is explicitly denied in the technical linguistic literature, in what are called Platonistic accounts of language, and denied in another way by a conceptualism that was devised by the same authors and inaccurately attributed to many linguists, including me. Apparently, many sociolinguists also deny it, and it is incompatible with structural and behavioral approaches to language. To my surprise, it is also rejected in current studies of language by leading neuroscientists, notably Terrence Deacon, in recent work that has been received favorably by eminent biologists — again, to my surprise. The approach therefore seems controversial, but I think appearances can be deceiving. A more careful look will show, I believe, that the basic assumptions are tacitly adopted even by those who strongly reject them, and in fact must be adopted, if only for reasons of coherence.
I am going to leave aside this interesting topic of contemporary intellectual history and simply assume that language can be studied as part of the world; in other words, I will continue with the biolinguistic approach that took shape about half a century ago, heavily influenced by ethology and comparative psychology, and that has been pursued intensively along quite a few different paths, including much of the work that purports to reject the approach. Well, assuming this, let us move on to some things that should be obvious. It can hardly be denied that some internal state is responsible for the fact that I speak and understand some variety of what is loosely called English, but not, say, Hindi or Swahili. Borrowing and, in fact, adapting a traditional term, we can call this state — whatever it is, it is internal to me — a state of the human faculty of language, primarily a state of the brain.
We can call each of those states an internalized language, in the technical literature often called an I-language; for simplicity, I'll call it that. It should also be indisputable that the language faculty has an initial state that is part of our biological endowment, allowing a certain range of options, the attainable languages. The faculty of language is then a special property that allows my granddaughter, but not her kitten or a chimpanzee, to attain a specific language given exposure to appropriate data — data that her mind, in some obscure way, is able to extract from the buzzing confusion and interpret as linguistic experience. That is not an easy task; no one knows how it is done, but obviously it is done.
More precisely, each baby acquires a complex of such states; that's a complication I'll leave aside. The expectation that language is like everything else in the organic world, and therefore based on a genetically determined initial state — which distinguishes, say, my granddaughter from her pets — has been called the innateness hypothesis. And there is a substantial literature debating the validity of the innateness hypothesis. The literature has a curious character: there are many condemnations of the hypothesis, but it is never formulated, and no one defends it. Its so-called defenders, of whom I am one, have no idea what the hypothesis is.
Everyone has some hypothesis about innateness, at least where language is concerned — anyone, that is, who is interested in the difference between a baby and, say, her pet. Furthermore, the invented term innateness hypothesis is completely meaningless: there is no specific innateness hypothesis, but rather various hypotheses about what the genetically determined initial state might be. These hypotheses, of course, are constantly changing as more is learned. All of this should be obvious. The confusion over these matters has reached such extreme levels that it is becoming difficult even to unravel, but I will leave all that aside. The biolinguistic approach regards mental faculties as states of the organism.
In particular, the internal languages — the I-languages — are states of the faculty of language. I will focus on language, but most of what follows should also apply to other cognitive faculties and, indeed, to much simpler organisms — say, for example, the communication or navigation of a bee. Well, when we take this approach, several questions arise at once. The central task is to determine the nature of the initial and attained states. And although the issue again seems to be controversial, I know of no serious alternative to the thesis that these are substantially computational states, whether we have in mind insect navigation or what you and I are doing now.
Again, this is considered controversial, but since there are no alternative ideas, I don't understand why. It is not considered controversial for, say, insect navigation; for humans, questions are raised about it. Brain research in these terms is sometimes called psychological, and is contrasted with research in terms of cells, chemical processes, electrical activity, and so on, which is called physiological. These are again terms of convenience; they have no defined boundaries. Chemistry and physics were distinguished in much the same way not long ago. The formulas involving complex molecules that we now study in school were until recently considered to be, and I quote, simply classificatory symbols that summarize the observed course of a reaction.
The ultimate nature of molecular groupings was considered insoluble, and the actual arrangement of atoms within a molecule, if that means anything, was never to be read into the formulas — I am quoting a standard history of chemistry. Kekulé, whose structural chemistry paved the way for the eventual unification of chemistry and physics, doubted that the absolute constitution of organic molecules could ever be given; his own models, his valence analysis, and so on were to have only an instrumental interpretation, as calculating devices. Leading scientists understood large parts of physics in the same way, including the molecular theory of gases and even Bohr's atomic model.
In fact, just a few years before physics and chemistry came together in the explanation of the chemical bond, the first American chemist to win a Nobel Prize dismissed talk about the real nature of chemical bonds as, in his terms, metaphysical twaddle: the bond was nothing more than a very crude method of representing certain known facts about chemical reactions, a mode of representation only — simply a calculating device. The rejection of this skepticism by a few prominent scientists, whose views were at the time condemned as conceptual absurdity, paved the way for the eventual unification. These are fairly recent debates; we are talking about the 1920s.
I think these fairly recent debates in the hard sciences have considerable relevance to current controversies over computational theories of cognitive capacities, from insects to humans — an important topic, which I have discussed a bit elsewhere and which I think deserves much more attention than it has received. Well, with the biolinguistic framework in place, we want to discover the relationship between psychological states and the world as described in other terms. We want to know how computational states are related to neurophysiological states — how they are represented in brain states, in standard terminology. We also want to discover how these mental states relate to the world external to the organism, as for example when the movements and noises produced by a forager bee direct others to a distant flower, or when I talk about a recent trip to India, or say that I recently read Darwin's Descent of Man, referring to a book. This is all called intentionality in philosophical jargon.
These general questions were raised prominently at the end of the Decade of the Brain, which closed the last millennium. To commemorate the occasion, the American Academy of Arts and Sciences published a volume in late 2000 summarizing the current state of understanding in these areas. The guiding theme of the volume was formulated by the distinguished neuroscientist Vernon Mountcastle in his introduction to the collection. It is, in his words, the thesis that mental things — indeed minds — are emergent properties of brains, although these emergences are not regarded as irreducible but as produced by principles that control the interactions between lower-level events, principles that we do not yet understand.
The same thesis has been put forth in recent years as an astonishing hypothesis of the new biology, a radically new idea in the philosophy of mind, a bold assertion — that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain — which opens the door to novel and promising inquiries, and so on. The contributors to the American Academy volume were for the most part quite optimistic about the prospects for bridging the remaining gaps between psychological and physiological accounts. Mountcastle's phrase "principles that we do not yet understand" reflects that optimism: it suggests that we will soon understand. Wilson wrote that researchers now "speak with confidence" of a forthcoming solution to the brain-mind problem.
We can usefully recall a similar optimism shortly before the unification of chemistry and physics. Thus, in 1929, Bertrand Russell, who knew the sciences well, wrote that chemical laws "cannot at present be reduced to physical laws." Like Mountcastle's "yet," the phrase "at present" expresses the expectation that reduction will take place in the course of scientific progress, perhaps soon. But in the case of physics and chemistry it never took place. What happened was something different and totally unexpected: the unification of a virtually unchanged chemistry with a radically revised physics. And there is no need to stress that the state of understanding and achievement in those areas, 50 or 80 years ago, was far beyond anything that can be claimed for the brain and cognitive sciences today — which should give us pause.
The American Academy volume discusses many important discoveries, but the main thesis should arouse our skepticism — and not only for the reason I just mentioned. Another reason is that the thesis is not new at all. In fact, it was formulated in practically the same words two centuries ago, at the end of the 18th century, by the eminent chemist Joseph Priestley. He wrote that the properties of mind arise from the organization of the nervous system itself, and that those properties called mental are the result of the organic structure of the brain, in the same way that matter has powers of attraction and repulsion that act at a distance — contrary to the founding principles of the modern scientific revolution, from Galileo to Newton and beyond.
Half a century before Priestley, David Hume had casually described thought as a little agitation of the brain, and soon afterward the French physician-philosopher Cabanis wrote that the brain must be considered a special organ designed to produce thought, just as the stomach and intestines are designed to perform digestion, the liver to filter bile, and the various glands to prepare salivary juices. La Mettrie had made similar proposals; they were suppressed at the time but are well known today. A century later, Darwin asked rhetorically why thought, being a secretion of the brain, should be considered more wonderful than gravity, a property of matter.
In fact, these and many other conceptions developed out of an inquiry into what was called thinking matter, deriving in part from what historians of philosophy sometimes call Locke's suggestion — John Locke's observation, in his words, that God could have chosen to superadd to matter a faculty of thought, just as he annexed effects to motion that we can in no way conceive motion capable of producing. The theological apparatus may well have been self-defense, as Locke's correspondents suggested. By the end of the 18th century, the thesis was widely considered inescapable.
Newton had demonstrated, to his considerable dismay, that matter — in the sense of the Galilean revolution, of the scientists of his day, and his own sense — does not exist. This being so, the mind-body problem could no longer even be formulated, at least in anything resembling its classical form. The current formulation seems, at best, to restate the problem of unifying psychological and physiological approaches, and to do so with very misleading terminology. There was no mind-body problem, any more than there was a physics-chemistry problem in the 1920s. Newton's discoveries in fact left no coherent alternative to the conclusion reached by Hume, Priestley, and others — the conclusion that today is being rediscovered in practically the same terms.
But the problem of emergence remains as unresolved as it was two centuries ago — including the question of whether this notion, with its reductionist connotations, is even the right notion; perhaps it is the wrong one, as turned out to be the case with chemistry and physics. The traditional mind-body problem is often derided as a ghost-in-the-machine problem, but that is a mistake: Newton exorcised the machine, leaving the ghost completely intact. Quite recently, two physicists, Paul Davies and John Gribbin, made a similar observation in concluding their book The Matter Myth. They write that during the triumphant phase of materialism and mechanism, in the 1930s, Gilbert Ryle ridiculed mind-body dualism with a pithy reference to the mind part as the ghost in the machine.
But even as Ryle coined that pithy expression in the 1930s, the new physics was at work undermining the materialist worldview on which his philosophy was based. As the 20th century closes, the authors continue, we can see that Ryle was right to dismiss the notion of the ghost in the machine — not because there is no ghost, but because there is no machine. Their point is correct, but the timing is off by at least two centuries, actually three. Newton demolished what at the time was called the mechanical philosophy — the belief that the world is a machine — though it took a little while for that conclusion to enter scientific common sense.
Newton himself was well aware of the conclusion and was not at all satisfied with it. He considered it an absurdity that no serious person could entertain, and he sought to evade it to the end of his life, as did prominent scientists of his day and long after — always in vain. Over time, it came to be recognized that Newton had not only effectively destroyed the entire materialist, physicalist conception of the universe, but had also undermined the standards of intelligibility on which the early Scientific Revolution was based. The outcome is familiar, at least in the history of science.
It was described very well in Friedrich Lange's classic 19th-century history of materialism. He noted that scientists had become accustomed to the abstract notion of forces — or rather to a notion that floats in a mystic darkness between abstraction and concrete comprehension — a turning point in the history of materialism that removes the surviving remnants of the doctrine far from the ideas and concerns of the genuine materialists of the 17th century, and deprives them of any significance. This too is by now practically a truism, at least among historians of science. One of the founders of the modern discipline, Alexandre Koyré, wrote 40 years ago that a purely materialist or mechanistic physics is impossible, and that we simply have to accept that the world is made up of entities and processes that we cannot grasp intuitively.
The problems of emergence and unification take on a completely new form in the post-Newtonian era — a form that is also unstable, changing as science accommodates new absurdities, as the founding figures of the Scientific Revolution, including Newton, would have regarded them. And I know of no reason to suppose that the process has come to an end. It is worth noting that the only part of our knowledge, or what we take to be knowledge, in which we can claim much confidence is our mental world, that is, the world of our experience. As reflective beings, humans try in various ways to make sense of this experience.
Some of this effort is sometimes called folk science; when it is carried out in a more systematic, careful, and controlled way, we call it science. A standard conclusion of contemporary science is that each organism, humans in particular, reflexively develops what ethologists call an Umwelt — a particular way of constructing and interpreting experience given sensory information — which is very different for us and for bees, for example. Furthermore, there is no great chain of being: in fundamental respects, insects have richer experience, and more sophisticated ways of using it for action, than humans.
Among the other standard conclusions of modern science are those that Priestley and many others drew centuries ago about thinking matter — reiterated at the end of the Decade of the Brain, just two years ago, without notable change and without much awareness that this is a revival, not an innovation; a revival of something that two centuries ago was considered an inescapable truism, for very good reasons, in the absence of a positive, determinate account of the non-mental part of the world, what is sometimes called the physical world.
In recent years, consciousness has conventionally been regarded as the hard part of the mind-body problem. Such talk is misleading at best, if it has any meaning at all — and it may not. Sometimes the problem is not stated very clearly; that is, it is posed in terms of questions to which we cannot even give wrong answers. So, for example, there is no sensible answer to the question: what is it like to be me? Or what is it like to be a bat, to quote Thomas Nagel's famous article. There are no bad answers to that, no good answers — none at all.
Formal semantic investigations often take the meaning of a question to be the set of propositions that are answers to it. And if this is at least a condition on meaning, then it follows that if there are no sensible answers, the question has no meaning. Even where legitimate questions are raised, as far as I can see we have no good reason to suppose that they are intrinsically harder than many other problems — say, the problems posed for our understanding by quantum mechanics, or by cosmological theories of an infinite number of universes, or, indeed, by the properties of motion.
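The answer-set condition on question meaning mentioned above can be rendered as a toy sketch. This is purely illustrative — the function names are invented, and real formal semantics (Hamblin-style question semantics) works over model-theoretic propositions, not strings:

```python
# Toy model of the answer-set view of question meaning: the meaning of
# "Who is P?" is the set of propositions "x is P" for each individual x
# in the domain of discourse.
def question_meaning(domain, predicate):
    """Return the set of propositions that count as answers."""
    return {f"{x} is {predicate}" for x in domain}

def is_meaningful(answer_set):
    """On this (simplified) condition, no answers means no meaning."""
    return len(answer_set) > 0

answers = question_meaning(["Alice", "Bob"], "asleep")
print(sorted(answers))   # ['Alice is asleep', 'Bob is asleep']
print(is_meaningful(answers))                         # True
print(is_meaningful(question_meaning([], "asleep")))  # False
```

The point carried over from the lecture is the last line: a question whose answer set is empty — one with no good or even wrong answers — counts as meaningless under this condition.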
We have no reason, as far as I know, to question the views of Newton, David Hume, and other not inconsiderable figures who in various ways came to Locke's conclusion that motion has effects that we can in no way conceive motion capable of producing. Even before Newton, perplexity about motion was profound; his predecessor William Petty described elastic motion as "the hard rock in philosophy" — philosophy meaning what we call science. The obscurity was so great that Robert Boyle felt it demonstrated the existence of an intelligent author and disposer of things. Even the skeptical Newtonian Voltaire held that the impenetrable mysteries of motion showed that there must be a God who gave motion to matter — in a manner similar to Locke's suggestion.
The hard problem cannot be said to have been solved; it was simply abandoned in the course of a significant revision of the enterprise of science — the recognition that in some fundamental sense the world is simply unintelligible to us, and that we have to lower our sights to the search for intelligible theories, which is something quite different. And even that goal has been strongly questioned by prominent physicists, for example in the criticism, a century ago, of atomic theory, or even of the idea that physics should go beyond establishing quantitative relations among observable phenomena.
The importance of that change should not be underestimated. It was recognized very early, for example by David Hume, who wrote that Newton's discoveries reveal "the obscurity in which the ultimate secrets of nature ever will remain." These mysteries of nature, as Hume called them, referring to the phenomena of motion, will remain beyond our cognitive reach — perhaps, we might speculate (he did not), for reasons rooted in the biological endowment of the curious creature that alone is capable of even contemplating such questions. I spoke a little about these topics 35 years ago, and what has happened since then, including incidentally my own belated self-education, inclines me to believe that what I said then should be reiterated with much more force and depth, and with much more explicit connections to contemporary discussions of the problems of language and mind.
Well, let us return to the narrower question of the emergence of the mental aspects of the world — or perhaps the development of an account of the non-mental world that can be unified with them, if the physics-chemistry model turns out to be accurate. The magnitude of the gap that persists, and the very dubious grounds for general optimism about bridging it, are revealed very clearly in the American Academy symposium that reviewed the state of understanding at the end of the millennium. A leading vision specialist, on the optimistic end of the spectrum, nonetheless reminded the reader that how the brain combines the responses of specialized cells to signal a continuous vertical line is a mystery that neurology has not yet solved.
Nor has it solved even how one line differs from other lines or from the visual background. That was Semir Zeki. Science, the journal of the American Association for the Advancement of Science, devoted an issue to neuroscience a year ago. The summary article was co-authored by Nobel laureate Eric Kandel and was subtitled "Breaking Down Scientific Barriers to the Study of Brain and Mind." The article reviews many interesting topics, but it ends with the conclusion that the neuroscience of higher cognitive processes is only beginning — surely beginning from a higher plane than the one constructed by Descartes, who was in many ways the founder of modern neuroscience, but nevertheless only beginning.
The fundamental questions remain beyond even dreams of resolution, including those that were traditionally considered central to the theory of mind, such as choosing an action, or even thinking about performing one. There has been very valuable work on more specific questions — for example, how an organism executes an integrated motor action plan, say how a cockroach walks, or how a person reaches for a cup on a table.
But no one even asks why the person or the cockroach executes one plan rather than another; that question is raised only for the simplest organisms, the unicellular ones. In fact, the same is true even of visual perception, often considered a passive process. A couple of years ago, two cognitive neuroscientists, one of them a colleague of mine, published a review of research on a problem that, in their words, Helmholtz posed in 1850: "even without moving our eyes, we can focus our attention on different objects at will, resulting in very different perceptual experiences of the same visual field." There has been interesting work on that, but the phrase "at will" points to an area that lies beyond serious empirical inquiry. It remains as much a mystery as it was to Newton at the end of his life, when he was still seeking what he called a subtle spirit that lies hidden in all bodies and that could, without absurdity, account for their properties of attraction and repulsion, the nature and effects of light and of sensation, and the way the limbs of animal bodies move at the command of the will — all comparable mysteries for Newton, perhaps even beyond human understanding, he thought, like the principles of motion themselves. These are classical problems of the theory of mind, at least since Descartes, who incidentally also regarded them as possibly beyond human understanding.
Even if we limit ourselves to the study of mechanisms, the gaps are quite substantial. One of the leading cognitive neuroscientists, Randy Gallistel, recently pointed out that we clearly do not understand how the nervous system computes, or even the foundations of its ability to compute — even for the small set of arithmetic and logical operations that are fundamental to any computation. He happens to be talking about insects, but the point obviously generalizes. In another area, one of the founders of contemporary cognitive neuroscience, Hans-Lukas Teuber, introduced an important review of perception and neurophysiology by writing that it may seem strange to begin with the claim that there is no adequate definition of perception and to end with the admission that we lack a neurophysiological theory.
But that was the most that could be said. Now, it is true that this was 40 years ago, and there were dramatic discoveries right at the time he was writing, and since. But I suspect that Teuber, who has since died, would express the same judgment today. Teuber also outlined a standard way of approaching the unification problem. He explained that his purpose in reviewing perceptual phenomena and offering a speculative psychological account of them was that this might suggest directions in which the search for the neural bases of perception should proceed — that is, by clarifying the conditions that those neural bases must satisfy.
Now, that is a classical approach. Restricting the scientific enterprise to more modest goals — the intelligibility of theories rather than of the world — was one consequence of Newton's demolition of the Galilean revolution's hopes for a mechanical conception of the world. Another was the recognition that scientific research will have to be local in its expectations. General unification may take place, but perhaps only in the long term, and in ways that cannot be anticipated. The 18th-century chemist Joseph Black set the tone for later scientific work by recommending that chemical affinity be received as a first principle, which we can no more explain than Newton could explain gravitation, and that we defer accounting for the laws of affinity until we have established such a body of doctrine as Newton established concerning the laws of gravitation.
And, in fact, chemistry followed that path, separating itself more and more from physics. Physics instead followed Newton's counsel that nature would be conformable to herself and very simple, observing general principles of attraction and repulsion that relate the elementary particles of which all matter is constituted, in more or less the way that different buildings can be constructed from the same bricks. The goal, then, was to mathematize, to reduce all of nature to simple laws as Newton had done, for example, for astronomy. I happen to be quoting Arnold Thackray in his history of the Newtonian theory of matter and the development of chemistry.
He goes on to say that this was the compelling, beguiling, almost fascinating goal of much work that pursued the thoroughly Newtonian and reductionist task of uncovering the general mathematical laws that govern all chemical behavior. A different chemical tradition followed the path laid out by Joseph Black. Lavoisier, who founded modern chemistry, attempted to remain neutral, probably to avoid controversy. But his own work helped establish the separate chemical path that Dalton pursued, abandoning completely the corpuscularian theories of Boyle and Newton. Dalton adopted the radically different view that matter could exist in heterogeneous forms with varying principles.
His approach, Thackray writes, was chemically successful and therefore enjoyed the homage of history, unlike the philosophically more coherent though less successful reductionist schemes of the Newtonians. By the end of the 19th century, the fields of interest of chemists and physicists had become quite distinct. To quote a standard history of chemistry: chemists dealt with a world composed of about 90 material elements with many and varied principles and properties, while physicists dealt with a more nebulous mathematical world of energy and electromagnetic waves, perceived in light, radiant heat, electricity, magnetism, and later radio waves and X-rays. Matter for the chemists was discrete and discontinuous, energy for the physicists was continuous, and the gap seemed unbridgeable.
Meanwhile, chemists developed a rich body of doctrine, achieving the triumphs of chemistry in isolation from the newly emerging science of physics, again quoting Thackray. As I mentioned, the isolation ended only recently, and in a completely unexpected way: not by reduction, but by the unification of a radically revised physics with the bodies of doctrine that chemistry had accumulated, which had in fact provided important guidelines for the reconstruction of physics. That bears directly on Teuber's point about perception, and it has happened often in the history of science. We cannot know whether something similar might be necessary to unify the study of brain and mind, if we assume that this is a task within our cognitive reach, which we also do not know.
Well, I have already suggested, and will repeat, that there are interesting and important parallels between the debates about the reality of chemistry up to the unification that occurred just 65 years ago, and current debates in the philosophy of mind about the reality of the constructions of psychological approaches. It is now understood that the old debates about chemistry and physics were entirely pointless and based on serious misunderstandings. We simply have no grasp of reality other than what our best explanatory theories provide. If those turn out to be computational theories, fine; that is reality. My own view, which I have argued elsewhere, is that the current debates, very lively at the moment, are also largely pointless, and essentially for the same reasons. This includes central issues in the philosophy of mind and theoretical cognitive science that those of you in the disciplines, or even readers of the general literature, will recognize.
Well, considerations of the kind I have been reviewing were in the background of the so-called cognitive revolution of the 1950s, at least for some of the participants. Although it was unknown at the time, in many ways the change in perspective brought about by the cognitive revolution actually recapitulated the first cognitive revolution of the 17th century, including the focus on vision and language, in the latter case adopting the biolinguistic approach, that is, shifting the focus of attention from phenomena such as behavior and its products, texts for example, to the internal mechanisms that enter into producing the phenomena.
Now, that is a change, but it is actually a change that occurred in the 17th century; note again that there was a long regression afterward. And note also that this change still leaves us very far from the problems of action; that is a very different matter. I myself have often quoted Humboldt's aphorism that the central problem of the study of language is the infinite use of finite means, which was also one of the primary concerns of Cartesian philosophy before him, and a problem that could not even be clearly formulated until the middle of the 20th century, when the concepts of recursive generative procedures were clarified.
These procedures constitute finite means that are put to infinite use. But it is important to note, and I don't think I have emphasized this enough, that despite great progress in understanding the means that are available for infinite use, the question of how they are used is barely addressed; and it was that question that was fundamental for Descartes, Humboldt, and other early modern figures. Again, such questions are not even addressed for insects, let alone humans. It is reasonably clear that the human capacity for language is what is called a species property, biologically isolated in essential respects and close to uniform across the species.
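The idea of finite means put to infinite use can be illustrated with a toy sketch. The grammar below is an invention for illustration only, far simpler than anything discussed in the lecture: a single recursive rule generates sentences of unbounded depth from a fixed, finite specification.

```python
# A toy illustration of "finite means, infinite use": one recursive
# rule generates an unbounded set of expressions. The rule and the
# vocabulary are invented for this sketch, not taken from any
# linguistic theory discussed in the lecture.

def sentence(depth):
    """Generate one sentence with `depth` levels of clausal embedding."""
    if depth == 0:
        return "the child sleeps"
    # Recursive step: embed a full sentence inside a larger one.
    return "Mary thinks that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
# the child sleeps
# Mary thinks that the child sleeps
# Mary thinks that Mary thinks that the child sleeps
```

The specification is finite (two lines of grammar, in effect), yet no finite list exhausts its output, since `sentence(d + 1)` is always longer than `sentence(d)`.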
This actually seems less surprising today than it did until recently, in light of very recent discoveries of quite limited genetic variation among humans as compared with other primates, suggesting that we all descend from a very small breeding group, perhaps 100,000 years ago. So humans are basically identical from the point of view of, say, an outside biologist looking at us. The biolinguistic approach adopted from the beginning what has been called, quoting the recently published encyclopedia of cognitive neuroscience, the norm these days in neuroscience, the modular view of learning: the conclusion that in all animals learning is based on specialized mechanisms, instincts to learn in specific ways. In Randy Gallistel's words, these organs within the brain, as he calls them, perform specific kinds of computation according to a specific design. Apart from extremely hostile environments, the organs change state under the triggering and shaping effect of external factors, more or less reflexively, and in accordance with an internal design.
That is the learning process, though growth might be a more appropriate term for it, avoiding the misleading connotations of the term learning. The language organ, the faculty of language, conforms to that pattern, the normal pattern. According to the best theories we have, each attainable state of the system, each I-language, is a computational system that determines, generates in the technical sense, an infinite array of expressions, each of which is a store of information about sound and meaning that is accessed by systems of interpretation. The properties of the I-language result from the interplay of several factors. One is individual experience, which selects among the options that the initial state allows.
A second factor is the initial state itself, a product of evolution. And a third factor is the general properties of organic systems, in this case computational systems, which can reasonably be expected to incorporate principles of efficient computation. The general picture, crucially involving the third factor, is familiar from the study of organic systems overall; the classic work of D'Arcy Thompson and Alan Turing on organic form and morphogenesis is an illustration, and a live topic in contemporary biology. One current example that may be suggestive in the present connection is the recent work of the mathematical biologist Christopher Cherniak in Maryland, who has been exploring the idea that minimization of what he calls wire length, as in microchip design, should yield what he calls the best of all possible brains.
He has tried to explain in these terms the neuroanatomy of nematodes, among the simplest and best-studied organisms, and also several ubiquitous properties of nervous systems, such as the fact that the brain is placed as far forward as possible on the body axis, trying to show that this is just a property of efficient computation based on minimizing wire length. Well, one can trace interest in this third factor, the general properties of organisms, to a Galilean intuition, namely Galileo's conviction that, in his words, nature is perfect, from the tides to the flight of birds, and that it is the scientist's task to discover in just what sense this is true.
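The flavor of the wire-length argument can be conveyed with a drastically simplified sketch. This is my own toy reduction, not Cherniak's actual model: place a "brain" node somewhere along a one-dimensional body axis so that total wiring to a fixed set of connected organs is minimized; when the connections cluster toward the front, the minimizing position lands near the front.

```python
# Toy version of the wire-length idea (an invented simplification,
# not Cherniak's model): choose the brain's position on a 1-D body
# axis to minimize total wire length to its connection endpoints.

def total_wire_length(brain_pos, endpoints):
    """Sum of wire lengths from the brain to each connected organ."""
    return sum(abs(brain_pos - e) for e in endpoints)

# Hypothetical layout: positions 0..9 along the body axis, with most
# connections (sensors, say) clustered near the front at position 0.
endpoints = [0, 0, 0, 1, 1, 2, 9]

best = min(range(10), key=lambda p: total_wire_length(p, endpoints))
print(best)  # → 1, i.e. near the front, where the connections cluster
```

Minimizing the sum of absolute distances places the brain at the median of its connection endpoints, so front-loaded wiring pulls the optimum forward, which is the qualitative shape of Cherniak's claim.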
Newton's confidence that nature must be very simple, which I quoted, reflects the same intuition. Obscure though it may be, an intuition about what Ernst Haeckel called nature's drive for the beautiful has been a guiding theme of modern science since its early modern origins in the Galilean revolution, perhaps even its defining characteristic. It is hard to say exactly what it is, but there is no doubt that it is a guiding intuition. However, biologists have tended to think quite differently about the objects of their inquiry. Very commonly, they adopt the image of the Nobel laureate François Jacob, of nature as what he called a tinkerer, doing the best it can with the materials at hand, often doing quite a poor job, as human intelligence seems intent on demonstrating about itself.
The well-known contemporary biologist Gabriel Dover, a British geneticist, concludes in a recent book that biology is a strange and messy affair and that perfection is the last word one would use to describe how organisms work, particularly anything produced by natural selection; though of course organisms are produced only in part by natural selection, as he emphasizes and as every biologist knows, and to an extent that cannot be quantified with the tools available. Well, we simply do not know which of these conflicting intuitions is more accurate, the Galilean intuition or, say, Jacob's, and we will not know until we know the answers, and those answers seem very remote.
The same author, Gabriel Dover, writes that we are nowhere near relieving our deepest ignorance about the biological world around us, and he goes on to reserve his harshest words for those who seek to give scientific respectability to complex behavioral and human phenomena that we cannot even begin to investigate seriously. He calls this a sign of intellectual laziness at best, and blatant ignorance at worst, in the face of issues of enormous complexity that far exceed the reach of contemporary science. He gives some examples, which out of charity I will ignore. The long-term goal of investigating the third factor, the role of the general properties of organisms in determining the language faculty and the states, the internal languages, that it can attain, was actually formulated in the early days of the biolinguistic turn, but it was set aside as unfeasible, and the focus was placed on the first two factors, experience and the initial state.
In technical terminology, these are called the problems of descriptive and explanatory adequacy, the latter being the question of how the initial state enters into determining the transition to the final, attained state. The first attempts, 50 years ago, to replace traditional or structuralist accounts of language with systems of generative rules very quickly revealed that very little was known about the sound, meaning, and structure of language, and that huge problems had inadvertently been swept under the rug, rather as in the days when bodies were supposed to fall to their natural place. As has often been the case, one of the hardest steps in the development of a science is the first: being puzzled by what seems entirely natural and obvious.
Getting a realistic picture of what had been overlooked was an enormous task in itself, all the more so in light of the recognition, which came very early, that the apparent complexity and diversity of languages had to be in large part an illusion. The reasoning behind this conclusion is standard in biology: as with other organs of the body, experience can play only a very limited role in determining the state that is attained, in this case the acquired language. Thus even a young child has mastered a rich and highly articulated system of sounds, meanings, and structural properties that goes far beyond any available evidence and is shared with others whose experience is different but equally restricted.
It must be, then, that the initial state plays an overwhelming role in determining, in all its aspects, the language the child attains. Surely experience has a triggering and shaping role, as with other organs, but it has to be a limited one; there is no reason to suppose that language and other higher mental faculties depart radically from everything else known in the biological world. The task was to show that the apparent richness, complexity, and diversity is in fact an illusion, that all languages are cast in the same mold, and that experience serves only to set options within a fixed system of principles, all determined by the initial state.
As in the case of other biological systems, much of the research of the past 40 years in these areas has been driven by a kind of tension between descriptive and explanatory adequacy, that is, between the search for true theories of I-languages, the attained states, on the one hand, and a true theory of the invariant initial state of the language faculty on the other. The invariant initial state is the topic of what has been called universal grammar, adapting a traditional notion to a rather new context. The search for descriptive adequacy, a true theory of Hungarian, say, leads to complex and intricate accounts of particular constructions in particular languages, differing from one another.
The search for explanatory adequacy, in contrast, seeks to find the common basis from which existing and all other possible languages arise, given structured data of experience, through the operations of the initial state, again in some unknown way. The first proposals of the 1950s suggested that the initial state, the topic of universal grammar, provides a kind of format for rule systems and for their organization, along with an evaluation procedure for selecting one instance of the format over another in terms of its success in capturing authentic linguistic generalizations, an empirical notion that also incorporates a theory-internal version of standard considerations about best theories. At first, the rules themselves were adaptations of informal traditional notions, which had proven completely inadequate when subjected to close examination: rules, for example, for forming relative clauses in Hungarian, or the passive in Japanese, or comparable constructions in the Romance languages, and so on. The general approach offered a kind of solution to the central problem of the study of language, sometimes called in the literature the logical problem of language acquisition, that is, how the initial state maps structured experience to the final state. But, as was stressed, the solution held only in principle, because in practice the conception was unfeasible due to astronomical computational demands.
Well, for some 40 years there were attempts to reduce the scale of the problem by searching for general principles that could be abstracted from particular grammars and attributed to universal grammar, the theory of the initial state of the language faculty, leaving residues that might be more manageable. Some of those proposals were being explored then, and I reviewed them in the lectures here 35 years ago. After that time there was some progress, then considerable progress, but it still left the tension unresolved; because the big picture was somehow fundamentally flawed, there was no real, feasible solution to the logical problem of language acquisition.
Well, a possible resolution of the tension was reached, after much effort, about 20 years ago, with the crystallization of a picture of language that marked a very sharp break with a long and rich tradition going back to classical India and Greece. It is sometimes called the principles and parameters approach, and it dispenses entirely with the central notions of traditional grammar, such as grammatical construction and grammatical rule. From this point of view, categories like relative clause or passive construction are understood to be quite real, but only as taxonomic artifacts, rather as aquatic organisms, which would include dolphins, trout, eels, and some bacteria, are a category, but not a biological category.
The phenomenal properties of these artifacts result from the interaction of the invariant principles of the initial state, the faculty of language, with a finite number of parameters, each fixed one way or another. It follows that there are only a finite number of possible human languages, apart from idiosyncrasies and choice of lexical items, and even these are sharply constrained. That means the problem of infeasible search is eliminated, an important conclusion if it is correct. The conception has by now been applied to typologically different languages of almost all known types, and it has led to many discoveries, to a host of new questions never before contemplated, and sometimes to suggestive answers.
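The logic of a finite parameter space can be sketched in a few lines. The parameters and values below are invented for illustration and are far cruder than anything proposed in the principles-and-parameters literature; the point is only that fixing each of a finite set of switches yields a finite space of possible grammars, so acquisition is switch-setting rather than unbounded search.

```python
from itertools import product

# Purely illustrative "parameters" (invented for this sketch, not
# actual proposals from principles-and-parameters theory).
parameters = {
    "head_position": ["head-initial", "head-final"],
    "null_subject":  [True, False],
    "wh_movement":   [True, False],
}

# Each full assignment of values is one possible "grammar".
grammars = [dict(zip(parameters, values))
            for values in product(*parameters.values())]

print(len(grammars))  # 8 — a finite space: 2 * 2 * 2 settings

# On this picture, "acquisition" is just fixing switches from
# evidence, not searching an unbounded space of rule systems.
english_like = {"head_position": "head-initial",
                "null_subject": False,
                "wh_movement": True}
print(english_like in grammars)  # True
```

With, say, 30 binary parameters the space grows to 2^30 grammars, still finite, which is what distinguishes this picture from the earlier format-plus-evaluation conception with its infeasible search.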
The principles and parameters approach is an approach, not a theory; within the general approach there are many diverse theories. In fact, there is a very good introduction to the topic that Mark Baker has just published, called The Atoms of Language. He himself has made important contributions to the approach, working mainly on languages that seem to lie at opposite ends of the spectrum of typological possibilities, chosen on purpose, of course. Mohawk and English is the pair he has studied most intensively, trying to show that, although they are about as different phenomenally as two languages can be, they are actually virtually identical apart from small changes in a few parameters, so that a Martian observer, who views humans the way we view other organisms, would conclude that they are essentially identical, dialectal variants of the same language.
Extensive work of a similar nature has been carried out around the world, with quite revealing results. One major EU-funded program studies the large number of languages of Europe, a great many of them misleadingly called German or Italian and so on, though the varieties falling under those labels are often quite different languages; and similar work is under way elsewhere as well. I do not mean to suggest that the approach has been established; that is very far from true. But it has been very successful as a research program. As a stimulus to empirical and theoretical inquiry advancing toward the goals of descriptive and explanatory adequacy, it has far surpassed everything that preceded it, not only in the depth of analysis of particular languages but also in the range of typologically different languages that have been investigated.
And also in opening new areas of linguistic structure that had barely been explored before. Related fields, such as the study of language acquisition, have also been completely revitalized within a similar framework and now scarcely resemble what they were 20 or 30 years ago. There are some important steps toward convergence, though it will certainly be a long and difficult road; even if the approach turns out to be more or less on the right track, we are far from having a clear idea of what the principles and parameters actually are. But I think it is fair to say that the study of language has moved to an entirely new plane in the past 20 years.
Well, I want to pick up these topics tomorrow, turning particularly to the role of the third factor, the general properties of organisms, and to say something about that, and then to move on to what are called questions of intentionality, that is, the question of how language, now understood within the biolinguistic framework, relates to the rest of the world. (Audience applauds) - Professor Chomsky has agreed to answer a couple of questions that were submitted by the audience. Since these questions were submitted at the beginning of the lecture, they may not concern what was covered today;
they may be a preview of what will be discussed tomorrow. "Do you still believe that the human language faculty evolved in a sudden evolutionary leap? Doesn't this conflict with the general finding that evolution is gradual?" - This idea, sometimes called the monstrous mutation, was invented, as far as I know, by Elizabeth Bates, and it has been repeated by many other people and attributed to me; for what reason, I haven't the slightest idea. I have never said a word about it, and I have no idea whether it is true, nor does anyone know anything about these topics in general. Why it was invented and attributed to me, you can try to find out.
The fact is that no one knows anything about these things, even for much simpler questions. So, for example, a well-known fact about biological organisms is that, above the bacterial level, they all seem to have essentially the same body plan, the same body shapes, and now something is understood about how that works: certain regulatory genes have been discovered, master genes that create the forms, and these are spread throughout the whole organic world of animals. Well, very recent work, actually a couple of Nature papers just a few weeks ago, reports experimental laboratory work showing how this could have happened in a monstrous mutation.
Well, okay, maybe, maybe not. There are recent scientific papers on the evolution of the human brain, published a couple of weeks ago, which actually suggest that a small mutation occurred that suddenly led to an explosion of brain cells. Maybe, maybe not; you know, I have nothing to say on these matters, I just repeat what is in the technical literature. But why this conception was attributed to me is a total mystery as far as I know. - I hope this one is not of the same sort; it again asks: "Do you still consider language an autonomous module that is independent of other cognitive systems? Wouldn't it be easier to assume that linguistic ability relies on the same cognitive mechanisms as other forms of human cognition?" - Well, human language, I mean, that is like asking whether the visual system is different from other systems in the body.
Of course it is not, not to mention the immune system. Of course, it is not separate from the rest of the body; you cannot remove the immune system and leave the rest of the body intact, I mean, it's impossible. Each subsystem of the body that is isolated for a particular investigation is isolated because it has some kind of integrated character and you think you can learn something about it. That doesn't mean it's in a little box that you can take out, right? The immune system is in every cell, but it is still a system. And presumably the same holds for the faculty of language.
I mean, it brings together a lot of different components that come from everywhere. For example, the faculty of language, what you and I are doing now, makes essential use of the delicate architecture of the bones of the middle ear, okay. They are fantastic, marvelously designed for understanding language. Of course, they evolved for reasons that have absolutely nothing to do with language: apparently the skulls of the early mammals, about 160 million years ago, were getting bigger, and as they got bigger, bones of the reptilian jaw began to migrate and, apparently for mechanical reasons, ended up in the middle ear, which is excellent for language.
And the same is true generally. Take the computational principles at the core of language: we could discover that they are the same computational principles that insects use for navigation. If we made that fascinating discovery, it would mean that this is one of the things that were recruited to form the language faculty. The idea that the language faculty is isolated from other cognitive systems simply does not make any sense; it cannot be. We use the faculty of language to express our thoughts, to think to ourselves, to communicate, to hold and convey attitudes, to perceive the world.
Of course it is integrated with the whole collection of cognitive faculties and other faculties. Again, the interesting question to ask about that question is why it says "do you still consider," because no one could believe this. There is a huge mythology that has developed about these things; it is a kind of characteristic of studies of the human higher mental faculties, which are pursued at a level of irrationality that is hard to confront. In fact, centuries ago the hard sciences were treated similarly, but we should be able to escape that. - Has your opinion on the poverty of the stimulus changed in light of recent studies? - There are no such studies. (Audience laughs) All the studies show what everyone already knows: that the amount of data available for language acquisition is extremely small.
This notion of poverty of the stimulus is a bit like what is called the innateness hypothesis. Everyone believes in poverty of the stimulus, not only for language but for everything. There was once a certain embryo that became my granddaughter. Why didn't it turn into a worm, say? I mean, is it because of the nutrition? Would a different nutritional input to the cell have turned it into a worm? Well, you know, no one takes such questions seriously; they are too idiotic. What every scientist automatically assumes is what, in the case of language, is called poverty of the stimulus. It is considered controversial in the case of the human mental faculties, but it is taken for granted without discussion in the rest of biology because it is so obvious.
I mean, it cannot be that what you and I are doing now is based on early or late input from the environment, because there is none, practically none. In fact, it is easy to demonstrate that the things even a small child does draw on evidence at a statistical level that barely rises above noise in actual speech. The point is quite obvious; there is nothing to argue about. Everyone agrees that there is poverty of the stimulus. The only question is: what is the initial state that first structures the data as experience, which is not a trivial task, and then takes that experience and, almost instantaneously and reflexively, as Gallistel points out, constructs an attained state? That's the problem.
If anyone has thoughts on this, be my guest; it's a great topic. - You have suggested that human beings possess a language acquisition device. Is this device a real entity or a metaphor? - Well, that's something like asking whether the genetic makeup that leads to an immune system is a real system or a metaphor. I mean, we end up doing what you and I are doing now. Well, back to my granddaughter, and imagine she has a pet chimpanzee. Suppose both are exposed to exactly the same data. One of them ends up doing what you and I are doing now; the other does nothing.
Suppose she also has a pet bee, okay. Well, the pet bee and my granddaughter are exposed to the same visual data, and in the case of the bee it will end up with a complicated and intricate communication system that humans cannot duplicate: it is beyond our cognitive capacities to communicate the distance, height, and quality of a flower; humans simply can't do that, but the bee does it reflexively. Well, looking at facts like these, every scientist assumes that there is something special about my granddaughter, or about the bee and the chimpanzee, such that the same data lead them to very different states.
And unless there are miracles, that has to do with their genetic makeup. So we can talk about a language acquisition device for the human, a call acquisition device for the chimpanzee, a communication acquisition device for the bee. These are not metaphors; they are just descriptions of subcomponents of an organism that you are trying to understand. There is nothing metaphorical about them, and there really shouldn't be anything controversial about them either. I mean, unless one thinks that the bee, the chimpanzee, and my granddaughter should end up the same way given the same data; unless someone thinks that, the answer should be obvious. - Is there any evidence of evolutionary change in the historical record of human language? - No, and that's why I mentioned earlier that the current understanding, at least, is that there is very little genetic variation among humans compared with other primates.
Which can only mean that we're basically all cousins who came from the same small breeding group not very long ago, say 100,000 or 200,000 years, which is essentially nothing in evolutionary time. There hasn't been time for any evolution. No one knows exactly when language came on the scene, but probably around the same time, give or take 100,000 years. And there just hasn't been time for any evolutionary change to occur, which is why my granddaughter, and I don't know why I keep focusing on her, but my granddaughter, if she had grown up, say, in Tokyo, would be speaking perfect Japanese, because she is genetically identical to children growing up in Tokyo with respect to this capacity, and probably almost all capacities.
So if that's true, there can't have been any significant evolutionary change in the history of language; it's too short. - This is the last one, and it's a very open question, so feel free to ignore it or do whatever you want with it. "What are the details of current neuroscience results that influence your current biologically based theories of language?" - Well, neuroscience is currently providing some new insights, not so much neuroscience as new techniques, for example brain imaging, which are providing quite interesting, suggestive results. You know, fMRI and things like that. One doesn't know exactly what to make of them, because these are ways of investigating systems that are simply not understood, on the basis of signals detected from the outside.
Some of the results are quite striking. For example, the Proceedings of the National Academy of Sciences, just a couple of months ago, published an article by Laura Petitto, a researcher in Montreal, now at Dartmouth, which seems to have shown that in deaf children who sign, the neural centers involved in processing sign are, as far as one can see, virtually identical to those of hearing babies processing auditory information. That's a very surprising result; a little less surprising than it would have been 20 years ago, because work has shown that these systems are structurally very similar in many ways.
But that it is represented neurally in the same place is quite interesting. If true, it would mean that whatever our faculty of language is, it is probably independent of sensory modality, some analytic computational system that can use one modality or another. The fact that, say, vision and hearing end up being analyzed in the same place is a bit surprising, given what is known about the projection of sensory systems to the cortex. But that type of research is quite interesting and suggestive, and more work of that kind is on the way. As for neuroscience itself, remember that very little is known.
I mean, as I quoted, the current edition of the MIT encyclopedia of the cognitive sciences points out that we do not understand the neurophysiology of even the most elementary computations, which are used throughout the animal world, say for insect navigation. Even for the simplest cases, the neurophysiology is completely obscure. When you try to get to human language, you are somewhere out in outer space. It's an important issue that may one day be understood, but it's quite remote. - Well, thank you very much for a most stimulating lecture. (audience applauding) (upbeat music)
