
How Nvidia Grew From Gaming To A.I. Giant, Now Powering ChatGPT

Apr 16, 2024
This is what hundreds of millions of players around the world play. It's a GeForce. This is the chip inside. For almost 30 years, Nvidia's chips have been coveted by gamers, shaping what's possible in graphics and dominating the entire market since it first popularized the term graphics processing unit with the GeForce 256. Now its chips power something completely different. ChatGPT has started a very intense conversation. Some believe it is the most revolutionary thing since the iPhone. Venture capital interest in AI startups has skyrocketed. All of us working in this field have been optimistic that at some point the world at large would understand the importance of this technology.
And it's really exciting that that's starting to happen. As the driving force behind large language models like ChatGPT, Nvidia is finally reaping rewards for its investment in AI, even as other chip giants suffer in the shadow of US-China trade tensions and a waning chip shortage that has weakened demand. But the California-based chip designer relies on Taiwan Semiconductor Manufacturing Company to manufacture almost all of its chips, leaving it vulnerable. The biggest risk is really US-China relations and the potential impact for TSMC. I mean, if I'm an Nvidia shareholder, that's really the only thing that keeps me up at night.
This isn't the first time Nvidia has found itself at the forefront of an uncertain emerging market. It has been on the brink of bankruptcy several times in its history when founder and CEO Jensen Huang bet the company on seemingly impossible ventures. All companies make mistakes and I make a lot of them. And some of them, some of them, put the company in danger. Especially in the beginning, because we were small and we were up against very, very large companies and we were trying to invent this new technology. We sat down with Huang at Nvidia's Silicon Valley headquarters to find out how he achieved this latest reinvention and got a behind-the-scenes look at all the ways it powers much more than just gaming.
Nvidia, now one of the ten most valuable companies in the world, is one of the few Silicon Valley giants that, 30 years later, still has its founder at the helm. I delivered the first one of these, inside an AI supercomputer, to OpenAI when it was first created. Jensen Huang, 60, Fortune's Entrepreneur of the Year and one of Time's Most Influential People of 2021, immigrated to the United States from Taiwan when he was a child and studied engineering at Oregon State and Stanford. In the early '90s, Huang met fellow engineers Chris Malachowsky and Curtis Priem at Denny's, where they talked about their dreams of enabling PCs with 3D graphics, the kind made popular by movies like Jurassic Park at the time.
If we go back 30 years, at that time the PC revolution was just beginning and there was quite a bit of debate about what the future of computing would be and how software should be run. And there was a large group, and rightly so, that believed that the CPU, or general-purpose computing, was the best way to go. And it was the best way to go for a long time. However, we felt that there was a class of applications that would not be possible without acceleration. The friends launched Nvidia in a condo in Fremont, California, in 1993. The name was inspired by N.V., for next version, and Invidia, the Latin word for envy.
They hoped to speed up computing so much that everyone would be green with envy. With over 80% of revenue, its core business remains GPUs. Generally sold as cards that plug into a PC's motherboard, they accelerate, or add computing power to, the central processing units (CPUs) made by companies such as AMD and Intel. You know, they were one of dozens of GPU manufacturers at the time. They're the only ones, them and AMD actually, that really survived, because Nvidia worked so well with the software community. This is not a chip business. It's about discovering things from beginning to end. But at first its future was far from guaranteed.
Frankly, there weren't many applications for it at first, and we smartly chose one particular combination that was a hit. It was computer graphics, and we applied it to video games. Now Nvidia is known for revolutionizing gaming and Hollywood with fast visual effects rendering. Nvidia designed its first high-performance graphics chip in 1997. Designed, not manufactured, because Huang was committed to making Nvidia a fabless chip company, keeping capital expenses very low by outsourcing the extraordinary expense of manufacturing the chips to TSMC. On behalf of all of us, you are my hero. Thank you. Nvidia wouldn't be here today, and neither would the other thousand fabless semiconductor companies, if it weren't for the pioneering work that TSMC did.
In 1999, after laying off most of its workers and nearly going bankrupt, Nvidia released what it claims was the world's first official GPU, the GeForce 256. It was the first programmable graphics card that allowed for custom lighting and shading effects. In 2000, Nvidia became the exclusive graphics supplier for Microsoft's first Xbox. Microsoft and Xbox came about exactly at the time we invented this thing called the programmable shader, and it defines how computer graphics are made today. Nvidia went public in 1999 and its shares remained largely stable until demand soared during the pandemic. In 2006, it released a suite of software tools called CUDA that would eventually propel it to the center of the AI boom.
It is essentially a computing platform and programming model that changed the way Nvidia GPUs work, from serial computing to parallel computing. Parallel computing means: let me take a task and attack it all at once using many much smaller machines. Right? So it's the difference between an army with one giant soldier who is able to do things very well, but one at a time, and an army of thousands of soldiers who are able to take that problem and do it in parallel. So it's a very different computing approach. Nvidia's big steps haven't always been in the right direction.
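Huang's soldier analogy maps directly onto how a parallel program is structured. The sketch below is purely illustrative and is not CUDA code: it uses Python threads just to show the shape of the idea (split the task, attack all the pieces at once, combine the partial results), and Python's GIL means it won't actually run faster here, whereas a GPU would hand each piece to thousands of cores.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum_of_squares(data):
    # One "giant soldier": process the elements one at a time.
    total = 0
    for x in data:
        total += x * x
    return total

def parallel_sum_of_squares(data, workers=4):
    # An "army of soldiers": split the task into chunks, attack
    # all the chunks at once, then combine the partial results.
    # (A GPU applies the same pattern with thousands of tiny cores.)
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(serial_sum_of_squares, pieces)
    return sum(partials)

data = list(range(1_000))
assert parallel_sum_of_squares(data) == serial_sum_of_squares(data)
```

The key property, on a GPU as in this toy, is that the chunks are independent of each other, so they can all be computed at the same time.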
In the early 2010s, it made unsuccessful moves into smartphones with its line of Tegra processors. You know, they quickly realized that the smartphone market was not for them, so they left that market. In 2020, Nvidia closed a highly anticipated $7 billion deal to acquire data center chip company Mellanox. But just last year, Nvidia had to abandon a $40 billion bid to acquire Arm, citing significant regulatory challenges. Arm is a major CPU company known for licensing its proprietary Arm architecture to Apple for iPhones and iPads, to Amazon for Kindles, and to many major automakers. Despite some setbacks, today Nvidia has 26,000 employees, a newly built polygonal headquarters in Santa Clara, California, and billions of chips used for much more than just graphics.
Think data centers, cloud computing and, most notably, artificial intelligence. We are in all the clouds created by all the IT companies. And suddenly, one day, you discover a new application that was not possible before. More than a decade ago, CUDA and Nvidia GPUs were the driving force behind AlexNet, what many consider the Big Bang moment of AI. It was a new, incredibly accurate neural network that blew away the competition during a prominent image recognition contest in 2012. It turns out that the same parallel processing needed to create realistic graphics is also ideal for deep learning, where a computer learns on its own instead of depending on a programmer's code.
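The overlap between graphics work and deep learning comes down to the fact that both are dominated by large matrix operations whose output elements are independent of one another. As a minimal, framework-free sketch (hypothetical sizes, pure Python for readability), here is one dense neural-network layer; on a GPU, every cell of the output matrix would be computed simultaneously:

```python
def matmul(a, b):
    # Each output cell depends only on one row of a and one column
    # of b, which is exactly why a GPU can compute all the cells at
    # the same time.
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def relu(m):
    # Elementwise activation: another trivially parallel operation.
    return [[max(0.0, x) for x in row] for row in m]

# One dense layer: outputs = relu(inputs @ weights)
inputs = [[1.0, 2.0, 3.0]]        # a single example with 3 features
weights = [[0.5, -1.0],
           [0.25, 0.5],
           [-0.5, 1.0]]           # 3 inputs -> 2 outputs
print(relu(matmul(inputs, weights)))  # → [[0.0, 3.0]]
```

A network like AlexNet is, at this level of description, many such layers stacked, with far larger matrices, which is why the same silicon that shades millions of pixels also trains neural networks well.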
We had the good sense to put the entire company behind this. We saw early on, about a decade ago, that this way of making software could change everything, and we changed the company from the bottom up to the top and sideways. Every chip we made focused on artificial intelligence. Bryan Catanzaro was the first and only employee of Nvidia's deep learning team six years ago. Now there are 50 people and they continue to grow. For ten years, Wall Street asked Nvidia: why are you making this investment and no one is using it? And they valued it at $0 in our market cap.
And it wasn't until around 2016, ten years after CUDA came out, that suddenly people understood that this is a dramatically different way of writing computer programs, and that it has transformative speedups that then produce revolutionary results in artificial intelligence. So what are some real-world applications for Nvidia's AI? Healthcare is a big area. Think much faster drug discovery and DNA sequencing that takes hours instead of weeks. We were able to achieve the Guinness World Record on a genomic sequencing technique to actually diagnose these patients, and to manage one of the patients in the trial for a heart transplant: a 13-year-old boy who is improving today, and also a three-month-old baby who suffered epileptic seizures and could be prescribed anti-seizure medication.
And then there's art powered by Nvidia AI, like Rafik Anadol's creations that cover entire buildings. And when cryptocurrencies began to flourish, Nvidia GPUs became the coveted tool for mining the digital currency. Which is not really a recommended use, but it has created, you know, problems, because cryptocurrency mining has been a boom-or-bust cycle. So gaming cards go out of stock, prices get bid up, and then when the crypto mining boom crashes there is a big drop on the gaming side. Although Nvidia created a stripped-down GPU made just for mining, it didn't stop crypto miners from purchasing gaming GPUs, which sent prices skyrocketing.
And while that shortage has ended, Nvidia raised eyebrows among some gamers last year by pricing its new 40-series GPUs much higher than the previous generation. There is now too much supply, and the most recently reported quarterly gaming revenue was down 46% from a year ago. But Nvidia still beat expectations in its most recent earnings report, thanks to the rise of AI, as tech giants like Microsoft and Google fill their data centers with thousands of Nvidia A100s, the engines used to train large language models like ChatGPT. When we ship them, we do not ship them in packages of one.
We ship them in packs of eight. With a suggested price of almost $200,000, Nvidia's DGX A100 server board has eight Ampere GPUs that work together to enable things like ChatGPT's incredibly fast and amazingly human-like responses. I was trained on a massive text dataset that allows me to understand and generate text on a wide range of topics. Companies struggling to compete in generative AI publicly brag about how many Nvidia A100s they have. Microsoft, for example, trained ChatGPT with 10,000. It is very easy to use their products and add more computing power. And once you add that computing capacity, computing capacity is basically the currency of the Valley right now.
And Ampere's successor, Hopper, has already started shipping. Some uses of generative AI are real-time translation and instant text-to-image rendering. But this is also the technology behind disturbingly convincing, and some say dangerous, videos, text and audio. Is there any way that Nvidia is guarding against some of these bigger fears that people have, or putting in safeguards? Yes, I think the safeguards we are building as an industry around how AI will be used are extraordinarily important. We're trying to find ways to authenticate content so we can tell if a video was actually created in the real world or virtually.
The same goes for text and audio. But being at the center of the generative AI boom doesn't make Nvidia immune to broader market concerns. In October, the United States introduced sweeping new rules banning exports of cutting-edge AI chips to China, including Nvidia's A100. About a quarter of its revenue comes from mainland China. How do you calm investors' fears about the new export controls? Well, the fact that Nvidia's technology is export controlled is a reflection of the importance of the technology we make. The first thing we have to do is comply with the regulations, and it was a turbulent month or so, you know, as the company raced to redesign all of our products to comply with the regulations and still be able to serve the commercial customers that we have in China.
We can serve our customers in China with regulated parts and provide them with excellent support. But perhaps an even bigger geopolitical risk for Nvidia is its dependence on TSMC in Taiwan. There are two issues. First, will China take over the island of Taiwan at some point? And two, is there a viable competitor to TSMC? And as of now, Intel is aggressively trying to get there. And you know, their goal is 2025. And we'll see. And this is not just a risk for Nvidia. This is a risk for AMD, for Qualcomm and even for Intel. This is one of the main reasons why the United States passed the CHIPS Act last summer, which sets aside $52 billion to incentivize chip companies to manufacture on American soil.
Now TSMC is spending $40 billion to build two chip manufacturing plants in Arizona. The fact of the matter is that TSMC is a really important company and the world doesn't have more than one of them. It is imperative for us and for them that they also invest in diversity and redundancy. And will you move some of your manufacturing to Arizona? Oh, absolutely. We will use Arizona. Yes. And then there's the chip shortage. As it comes to an end and supply catches up with demand, some types of chips are seeing a price drop. But for Nvidia, the rise of chatbots means that demand for its AI chips continues to grow, at least for now.
Look, the biggest question for them is how can they stay ahead? Because your customers can also be your competitors. Microsoft can try to design these things in-house. Amazon and Google are already designing these things internally. Tesla and Apple are also designing their own custom chips. But Jensen says competition is a net good. The amount of energy the world needs in the data center will grow. And you can see from the recent trends that it is growing very quickly and that is a real problem for the world. While AI and ChatGPT have been generating a lot of buzz for Nvidia, it's far from Huang's only focus.
And we take that model and we put it into this computer, and that's a self-driving car. And we take that computer and we put it here, and that's a little robot computer. Like the ones used at Amazon? That's right. Amazon and others use Nvidia to power robots in their warehouses, to create digital twins of massive spaces, and to run simulations to optimize the flow of millions of packages each day. Drive units like these at Nvidia's robotics lab are powered by Tegra chips that were once a flop in mobile phones. They are now used to power the world's largest e-commerce operations.
Nvidia's Tegra chips were also used in Tesla Model 3s from 2016 to 2019. Tesla now uses its own chips, but Nvidia is making self-driving technology for other automakers like Mercedes-Benz. That's why we call it Nvidia Drive. And basically, Nvidia Drive is a scalable platform, whether you want to use it for simple ADAS, assisted driving for your emergency braking warning, pre-collision warning or just lane keeping for cruise control, all the way up to a robotaxi where it is doing everything: driving anywhere, in any condition, in any type of weather. Nvidia is also trying to compete in a completely different field, launching its own data center CPU, Grace.
What would you say to gamers who wish you had focused exclusively on the core gaming business? Well, if it weren't for all our work in physics simulation, if it weren't for all our research in artificial intelligence, what we did recently with GeForce RTX wouldn't have been possible. Released in 2018, RTX is Nvidia's next big step in graphics, with a new technology called ray tracing. To take computer graphics and gaming to the next level, we had to reinvent and disrupt ourselves, basically simulating the paths of light and simulating everything with generative AI. Then we calculate one pixel and the AI imagines the other seven.
It's really quite amazing. Imagine a puzzle: we give you one of eight pieces and somehow the AI fills in the rest. Ray tracing is currently used in almost 300 games, such as Cyberpunk 2077, Fortnite and Minecraft. And Nvidia GeForce GPUs in the cloud enable high-quality streaming of over 1,500 games to almost any PC. It's also part of what enables simulations, the modeling of how objects would behave in real-world situations. Think weather forecasting, or autonomous driving technology based on millions of kilometers of virtual roads. It's all part of what Nvidia calls Omniverse, which Huang points to as the company's next big bet.
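The one-pixel-in-eight idea Huang describes can be sketched with a toy. To be clear about assumptions: this is not DLSS, and Nvidia's reconstruction uses a trained neural network; here simple linear interpolation stands in for the "AI" just to show the economics of rendering a fraction of the pixels and filling in the rest:

```python
def render_sparse(width, shade, stride=8):
    # Expensively "render" only every eighth pixel of a 1-D scanline.
    return {x: shade(x) for x in range(0, width, stride)}

def reconstruct(samples, width):
    # Stand-in for the neural network: fill the seven missing pixels
    # between each rendered pair by linear interpolation.
    out = [0.0] * width
    xs = sorted(samples)
    for x0, x1 in zip(xs, xs[1:]):
        for x in range(x0, x1 + 1):
            t = (x - x0) / (x1 - x0)
            out[x] = samples[x0] * (1 - t) + samples[x1] * t
    return out

def shade(x):
    return x / 64.0                     # a toy gradient "scene"

samples = render_sparse(65, shade)      # 9 real pixels out of 65
image = reconstruct(samples, 65)
# For this smooth toy scene, the cheap reconstruction matches the
# full render; a real scene needs a learned model, not interpolation.
assert all(abs(image[x] - shade(x)) < 1e-9 for x in range(65))
```

The payoff is the same in the toy as in the real system: the expensive step runs an eighth as often, and the reconstruction step is comparatively cheap.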
We have over 700 customers testing it now, from the automotive industry to logistics warehouses and wind turbine plants. So I'm very excited about the progress there. And it represents probably the largest container for all of Nvidia's technology: computer graphics, artificial intelligence, robotics and physics simulation, all in one. I have high hopes for it.
