Harvard Professor Explains Algorithms in 5 Levels of Difficulty | WIRED

May 02, 2024
What could we do now to improve the situation even more? Go through it again, but this time we no longer need to check the last one, because we know that number has bubbled up to the top. Yeah, because it's actually risen to the top. So: one and two, yeah, keep them; two and six, keep them; six and three, then you swap them, okay, we swap those; six and four, we swap again, okay; then six and seven, keep them; seven and five, we swap them, okay. And then, I think, to your point, we're pretty close. Pass through one more time: one and two, keep them; two and three, keep them; three and four, keep them; four and six, keep them; six and five, then swap them. Well, we'll swap those, and now, to your point, we don't need to bother with the ones that have already bubbled up; now we are 100% sure that it is sorted. Yes. And certainly the search engines of the world, Google and Bing and so forth, probably don't just keep web pages in one long list and search it in order, because that would take incredibly long when we're just trying to look up some data; there's probably some algorithm underlying what they do. And probably, similar to us, they do a bit of work in advance to organize things, even if not strictly sorted in the same way, so that people like you and me and others can find that same information.
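To make that optimization concrete, here is a minimal sketch in Python of the idea from the walkthrough above: each pass stops one element earlier, since the largest values have already bubbled up to the end. The example list mirrors the numbers used in the walkthrough.

```python
# Minimal sketch of the optimization described above: bubble sort that stops
# checking the elements that have already "bubbled up" on earlier passes.
def bubble_sort(values):
    n = len(values)
    for done in range(n - 1):
        # After `done` passes, the last `done` positions are already sorted,
        # so each pass can stop one element earlier than the previous one.
        for i in range(n - 1 - done):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
    return values

print(bubble_sort([1, 2, 6, 3, 4, 7, 5]))  # [1, 2, 3, 4, 5, 6, 7]
```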
So what about social media? Can you imagine where the algorithms are in that world, maybe, for example, in TikTok? The For You page, because those are recommendations, right? It's like Netflix recommendations, except it's more constant, because every video you scroll to is basically a new recommendation, and it's based on what you already liked, what you've saved, what you're searching for, so I guess there's some kind of algorithm that figures out what to put on your For You page, just trying to keep you engaged.

Presumably, the better the algorithm is, the better your engagement, and maybe the more money the company makes on the platform, and so on, so it all feeds together. But what you're describing is actually more artificial intelligence, if I may, because presumably there's no one at TikTok or any of these social media companies saying "if Patricia likes this post, then show her this post; if she likes that post, then show her this other post," because the code would grow infinitely and there is too much content for any one programmer to write those kinds of conditionals. Those decisions are made behind the scenes, so it's probably a bit more artificially intelligent, and in that sense you have topics like neural networks and machine learning, which really describe taking as input things like what you see, what you click on, what your friends see, what they click on, and trying to infer from that what we should show Patricia or her friends next. Okay, yeah, that makes the distinction make more sense now.
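As a rough illustration of that distinction, here is a small, hypothetical Python sketch (the `clicks` data and `recommend` function are invented for this example): instead of a programmer hand-writing one conditional per user and per post, the system scores unseen videos from interaction data.

```python
# Hypothetical sketch: recommend videos by similarity to what a user already
# clicked, instead of hand-coding "if Patricia likes X, show Y" rules.
from collections import Counter

# Invented example data: which users clicked which videos.
clicks = {
    "patricia": {"cooking_1", "cooking_2", "travel_1"},
    "sam":      {"cooking_1", "cooking_2", "cooking_3"},
    "lee":      {"travel_1", "travel_2"},
}

def recommend(user, k=3):
    """Score unseen videos by how many clicks similar users gave them."""
    seen = clicks[user]
    scores = Counter()
    for other, their_clicks in clicks.items():
        if other == user:
            continue
        overlap = len(seen & their_clicks)  # similarity = shared clicks
        if overlap == 0:
            continue
        for video in their_clicks - seen:
            scores[video] += overlap
    return [video for video, _ in scores.most_common(k)]

print(recommend("patricia"))  # ['cooking_3', 'travel_2']
```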
I am currently a fourth-year PhD student at New York University. I do robot learning, so that's half and half, robotics and machine learning. Sounds like you've dabbled with quite a few algorithms, so how do you research or invent algorithms? The biggest thing was just trying to think about the inefficiencies and also the connecting threads. The way I think about it is that, to me, an algorithm is not just about how to do something, but about doing it efficiently. Learning algorithms are practically everywhere now. At Google, say, systems are learning every day about things like which links might be better than others and reranking them; there are recommendation systems all around us, like content feeds and social networks, or, you know, YouTube or Netflix: what we watch is largely determined by these kinds of learning algorithms. There are a lot of them today.
There are a lot of concerns around some machine learning applications and deepfakes, where they can learn how I speak and how you speak, and even how we look, and generate videos of us. We're doing this for real, but you could imagine a computer eventually synthesizing this conversation. But how does it even know what I sound like and what I look like, and how to replicate that? All these learning algorithms that we talk about, what goes into them is just a lot of data. So data goes in, something else comes out, and what comes out is whatever objective function you optimized for. So where is the line between algorithms, say for games, with and without AI?
I think that when I started my undergraduate studies, machine learning was not yet synonymous with AI. Okay, and even in my undergraduate AI class we learned a lot of classic algorithms for games, for example A* search. That's a very simple example of how you can play a game without having learned anything: you're in some state of a game, you just look ahead, see what the possibilities are, and then choose the best possibility that you can see. Compare that to what you think of with an agent like AlphaZero, for example, or AlphaStar, or a lot of, you know, fancy new machine learning agents that learn even very difficult games like Go. Those are learned agents, because they get better as they play more and more games, and as they see more games they somehow refine their strategy based on the data that they saw. And again, this high-level abstraction remains the same: you see a lot of data and you learn from that, but the question is what objective function you are optimizing for.
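For reference, here is a minimal sketch of A* search, the kind of classic, non-learned algorithm mentioned above, applied to pathfinding on a small invented grid with a Manhattan-distance heuristic; this is an illustrative example, not the game setup discussed in the interview.

```python
# Minimal sketch of A* search on a small grid: 0 = open cell, 1 = wall.
import heapq

def a_star(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col)."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None  # no path found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # goes around the walls to reach (2, 0)
```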
Is it to win this game? Is it forcing a tie? Or is it something like opening a door in a kitchen? So the world is very focused on supervised, unsupervised, and reinforcement learning now; what is coming in the next 5 to 10 years, where is the world going? I don't want to use the word invasion, but that's what it feels like with algorithms in our everyday lives. Even when I was taking the train here, the trains are being routed with algorithms, though that has probably been around for, you know, 50 years. But as I was coming here I was also checking my phone, and those are different algorithms, and they surround us, they're with us all the time, and they make our lives better in most places, in most cases. I think it's going to be a continuation of all of those, and it feels like they're even in places you wouldn't expect. There's so much data about you and me and everyone else online, and this data is being collected and analyzed and influencing the things that we see. And here it seems like there's a kind of counterpoint: that might be good for marketers, but not necessarily good for you and me as individuals. You know, we're human beings, but to someone,
we might just be a pair of eyes that, you know, carries a wallet and is there to buy things. But there's so much more potential for these algorithms to improve our lives, even in ways we don't yet know we want to change. I'm Chris Wiggins, associate professor of applied mathematics at Columbia. I'm also the chief data scientist at the New York Times. The New York Times' data science team develops and deploys machine learning for newsroom and business problems, but I would say the things we do you mostly don't see; they could be things like the personalization algorithms that recommend different content. And that's done by data scientists, which is quite a different phrase from computer scientist: do data scientists still think in terms of the algorithms that drive much of this?
Oh, absolutely, yes. In fact, in data science in academia, often the role of the algorithm is the optimization algorithm that helps you find the best model or the best description of a data set. In data science in industry, the focus is often on an algorithm that gets turned into a data product, so a data scientist in industry develops and deploys the algorithm, which means not only understanding the algorithm and its statistical performance, but also all of the software engineering around systems integration: making sure that the algorithm receives reliable inputs and produces useful outputs.
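As a toy illustration of "the optimization algorithm that helps you find the best model of a data set," here is a short, assumed Python sketch of gradient descent fitting a line to a few made-up points; real data science pipelines are of course far more involved.

```python
# Toy sketch: gradient descent fitting a line y = w*x + b to invented data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]    # roughly y = 2x, made up for illustration

w, b = 0.0, 0.0
lr = 0.01                     # learning rate
for step in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 0.0
```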
Also, I would say there is organizational integration, which is how a community of people, like the set of people who work at the New York Times, integrates that algorithm into their process. That's interesting, and I feel like AI-based startups are a whole world of their own right now, and certainly within academia. Are there connections between AI, the world of data science, and the algorithms that exist? Can you connect those dots? You're right that AI as a field has really exploded; I would say, in particular, a lot of people have now experienced a chatbot that was really good. When people say AI today, they're often thinking about large language models, or they're thinking about generative AI, or they might be thinking about a chatbot.
One thing to keep in mind is that a chatbot is a special case of generative AI, which is a special case of using large language models, which is a special case of using machine learning in general, which is what most people understand by AI. You might have moments that are, um, what John McCarthy called "Look Ma, No Hands" results, where you do a fantastic trick and you're not quite sure how it worked. I think it's still very early. Large language models are still at the point of what might be called alchemy, where people are building large language models without a clear a priori idea of what the correct design is for the right problem. Many people try different things, often in large companies where they can afford to have a lot of people try things to see what works, publish it, and instantiate it as a product, and that in itself is part of the scientific process.
I also think a lot of it is, well, both science and engineering, because often you're building a thing and the thing does something amazing. To a large extent, we're still looking for basic theoretical results about why deep neural networks work so well in general, why they're able to learn so well; they're huge, billions-of-parameter models, and it is difficult for us to interpret how they are able to do what they do. Do you think it is a good thing, or something inevitable, that we programmers and computer scientists and data scientists who are inventing these things can't in fact explain how they work? Because I feel like with friends of mine in industry, even when it's something simple and relatively familiar like autocomplete, they can't actually tell me why a particular name appears at the top of the list, whereas years ago, when these algorithms were more deterministic and more procedural, you could point to the very line of code that made that name rise to the top. So is this a good or a bad thing, that we are losing control, maybe, in some sense, of the algorithm? Does it have risks?
I don't know if I would say it's good or bad, but I would say there is a lot of scientific precedent: there are times when an algorithm works very well and we have a limited understanding of why it works, or a model works very well and sometimes we have very little understanding of why it works the way it does. In the classes I teach, I certainly spend a lot of time on the fundamental algorithms that have been taught in classes for decades, whether binary search, linear search, bubble sort, selection sort, or the like.
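For reference, one of those fundamentals, binary search, fits in a few lines of Python; this sketch is just an illustration of the classic algorithm, not anything specific to the course mentioned.

```python
# Minimal sketch of binary search over a sorted list: halve the search range
# with every comparison.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 2, 3, 4, 5, 6, 7], 5))  # 4
```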
But if I'm already at the point where I can open ChatGPT, copy and paste a bunch of numbers or words, and say "sort these for me," does it really matter how ChatGPT sorts? Does it really matter to me as a user how the software sorts? It's as if the fundamentals become more outdated and less important, do you think? You're now talking about the ways in which code and computing are a special case of technology. To drive a car you don't necessarily need to know much about organic chemistry, even though organic chemistry is part of how the car works, right? You can drive it and use it in different ways without understanding much about the fundamentals. Similarly with computing: we are at a point where computing is at such a high level that, you know, you can import scikit-learn and go from zero to machine learning in 30 seconds. It depends on the level at which you want to understand the technology, where in the stack, so to speak. It is possible to understand it, do wonderful things, and move the world forward without understanding it at the particular level of someone who might have originally designed the actual optimization algorithm.
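That "zero to machine learning in 30 seconds" idea looks roughly like the following sketch, which assumes scikit-learn is installed and uses its built-in iris dataset; the point is that the optimization happening inside `fit` is completely hidden from the user.

```python
# Fit an off-the-shelf classifier without touching the underlying optimizer.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # the optimizer inside is hidden from us
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data, typically ~0.9+
```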
I have to say that for many optimization algorithms there are cases where an algorithm works very well, we publish a paper, there is a proof in the paper, and then years later people realize the proof was wrong and we're still not sure why that optimization works, but it works very well, or it inspires people to create new optimization algorithms. So I think our understanding of algorithms is loosely linked to our progress in advancing the underlying algorithms, but the two don't always necessarily require each other. And for those students especially, or even adults, who are now thinking about going into computer science and programming, and who were really excited about the idea of moving in that direction until, for example, November 2022, when suddenly, for many people, it seemed like the world was changing, and now maybe this is not such a promising path, not such a lucrative path,
given tools like ChatGPT, should they perhaps not get into the field? Large language models are a particular architecture for predicting, say, the next word or the next set of tokens. More generally, the algorithm comes into play when you think about how that model should be trained, or also how it should be fine-tuned; the P of GPT stands for pre-trained. The idea is that you train a large language model on some text corpus, which could be encyclopedias or textbooks or whatever, and then you might want to fine-tune that model on some particular task or some particular subset of texts, so both are examples of training algorithms. I would say that people's perception of artificial intelligence has really changed a lot in the last six months, particularly around November 2022, when people experienced really good chatbot technology.
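As a toy illustration of "predicting the next word," here is a tiny bigram model in Python counted from a made-up corpus; real large language models are vastly more sophisticated, but the training objective, predict the next token, is the same basic idea.

```python
# Toy next-word prediction: a bigram model counted from a tiny invented corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count which word follows which.
next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    counts = next_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' (follows 'the' twice, vs. 'mat' once)
print(predict_next("cat"))   # 'sat' or 'slept' (a tie in this tiny corpus)
```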
It was already around before; academics had already been working with GPT-3 before that, and GPT-2 and GPT-1, and for a lot of people it opened up this conversation about what artificial intelligence is, what we could do with it, and what the potential benefits and harms are, like any other piece of technology. Kranzberg's first law of technology: technology is neither good nor bad, nor is it neutral. Whenever we have any new technology, we must think about its capabilities and the possible good and bad. As with any area of study, algorithms offer a spectrum from the most basic to the most advanced, and even if right now the most advanced of those algorithms feels out of reach because you just don't have that experience, with each lesson you learn, with each algorithm you study, that end is getting closer and closer, so before long it will be accessible to you, and you will be at the most advanced end of the spectrum.
