Amazon (AMZN) announced two new chips on Tuesday, the Graviton4 and the Trainium2, developed in-house to power AWS instances and to train large language models, respectively, even as the company continues to offer Nvidia (NVDA) chips to its cloud customers.
Additionally, Amazon unveiled Q, a business-focused chatbot designed to boost productivity with task tracking and information lookup. However, as Yahoo Finance Tech Editor Dan Howley discusses, its functionality closely mirrors Microsoft's (MSFT) existing Copilot.
For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.
Video transcript
DAN HOWLEY: So let's talk about what Amazon is doing on the chip side. They announced two new chips today: the Graviton4 and the Trainium2. Real sci-fi sounding names. I imagine there's a lightsaber in there somewhere. The Graviton4, Amazon says, has 30% better compute performance, 50% more cores, 75% more memory bandwidth, yada, yada, yada.
Long story short, it's a better chip than the Graviton3, which they already had. Basically, this helps power AWS instances. If you use AWS, you're going to use a Graviton chip, depending on the application. So Graviton4 means better performance overall for AWS. That's good for their customers, because faster is better.
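To make that concrete: customers don't buy Graviton chips directly; they pick Graviton-based EC2 instance types when they launch servers. Here is a minimal boto3 sketch of launching one. The AMI ID is a placeholder, and the "r8g.large" instance type is an assumption based on Amazon saying Graviton4 would debut in R8g instances; in practice you'd substitute a real arm64 AMI and whatever instance type your region offers.

```python
# Minimal sketch: launching a Graviton-based EC2 instance with boto3.
# Assumes AWS credentials are configured. The AMI ID is a placeholder,
# and "r8g.large" is an assumption based on Amazon's stated plan to
# ship Graviton4 in R8g instances.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a real arm64 AMI
    InstanceType="r8g.large",         # assumed Graviton4-based instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```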
The Trainium2, though, is an important chip here, because this is where you get that kind of interplay between them and NVIDIA: who's using what, which chip you're going to use. The idea is that this one is supposed to be better for training, better for large numbers of chips. They have what's called an EC2 UltraCluster, which they said can have up to 100,000 chips in it.
Basically, it's great for training foundation models and large language models. And the important thing here is energy efficiency, because when we talk about these things, we're like, oh, all these chips, it's great. Not really. They take a lot of energy. It takes a lot to cool them. It takes a lot to power them. That's a huge drain on resources for companies, and not necessarily great for the environment when they're all talking about how they want to be environmentally friendly. But I digress.
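Digression aside, for a sense of what training on Trainium looks like in practice: AWS exposes the chip to PyTorch through its Neuron SDK and the XLA backend. Below is a minimal sketch, assuming torch-neuronx and torch-xla are installed on a Trainium (trn1-family, or eventually Trainium2) instance; the tiny model is purely illustrative.

```python
# Minimal sketch of training on a Trainium instance via the Neuron SDK,
# which exposes the chip to PyTorch through the XLA backend. Assumes
# torch-neuronx and torch-xla are installed (e.g., on a trn1 instance).
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()            # the Trainium device, via XLA
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(10):
    x = torch.randn(32, 128).to(device)     # toy batch for illustration
    y = torch.randint(0, 10, (32,)).to(device)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer)    # steps the optimizer and syncs the device
```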
The big thing here is, yes, they are working on their own chips, but they're also working with NVIDIA. Why are they doing this? It really comes down to customer choice. They want to be able to say to a customer: do you want to use the NVIDIA chip? Do you want to use an AMD chip? Do you want to use our chip? Which one makes more sense for you? And perhaps one is more expensive than the other.
The NVIDIA chips themselves cost tens of thousands of dollars. So for a company to purchase those and then recoup that cost, it's going to have to rent them out for a little more money. Developing their own chips is also incredibly expensive. But then they can say, look, we're not NVIDIA, but you can save a little on energy, and that'll help you as well. So they're really doing two things at the same time. It's not at odds to offer their own chip as well as the NVIDIA chip. People want choice.
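That choice shows up directly in the AWS APIs: a customer can filter instance types by processor architecture and compare options before committing. A small sketch using boto3's describe_instance_types call, where the arm64 filter surfaces the Graviton-based options:

```python
# Sketch: listing arm64 (Graviton-based) EC2 instance types with boto3,
# the kind of comparison a customer might run when weighing Amazon's
# chips against x86 alternatives. Assumes configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "processor-info.supported-architecture",
              "Values": ["arm64"]}]
)
for page in pages:
    for itype in page["InstanceTypes"]:
        print(itype["InstanceType"])
```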
- And that wasn't the only headline, Dan. They announced this new chatbot, Q. What's the story there?
DAN HOWLEY: We-- Jared Blikre pointing out Q, the "Star Trek" character.
- The "Star Trek," yeah.
DAN HOWLEY: Yes. He was--
- Was he real or was he an AI? Now, I can't even remember.
DAN HOWLEY: That's a-- or an alien. I think he was an alien. My friend's a huge Trekkie-- if he sees this, he's going to punch me in the face. But he was in the very last episode of "The Next Generation," which is an excellent episode. Now, I hate myself.
This is a chatbot--
- I love you, Dan.
DAN HOWLEY: Thank you. I appreciate that. This is a chatbot for businesses themselves. So it's not going to be something that you can jump on like Microsoft's Copilot, formerly Bing Chat, or Google's Bard. This is something that businesses are going to use to help develop their own copilots or their own applications. It comes loaded with Amazon's own AWS data, so it's trained on that information, and then you can take the data you have and run it through that.
So if you have to ask it questions, or you want it to do a specific task for you, you can do that in plain English-- you don't have to be a techie to do it. Interestingly, they were pooh-poohing OpenAI for a second. Our Allie Garfinkle, who was there covering this, did an excellent job pointing that out. They said, you don't want a company that relies on one AI business. So basically, needling Microsoft there.
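Amazon has since put Q Business behind an API, so that plain-English interaction can also be scripted. A hedged sketch using boto3's qbusiness client and its ChatSync operation; the application ID below is a placeholder that, in practice, comes from setting up a Q Business application in your AWS account.

```python
# Sketch: asking Amazon Q Business a plain-English question via boto3's
# qbusiness client (the ChatSync operation). The applicationId is a
# placeholder; a real one comes from creating a Q Business application.
# Depending on your identity setup, additional auth fields may be needed.
import boto3

q = boto3.client("qbusiness", region_name="us-east-1")

response = q.chat_sync(
    applicationId="app-id-placeholder",
    userMessage="Summarize last quarter's open support tickets.",
)
print(response["systemMessage"])  # Q's natural-language answer
```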
But Amazon's product is essentially what Microsoft is already offering with its various Copilots. They're trying to provide the same kind of service for AWS. The reason being, this provides a bigger landscape for customers to jump on. And if Microsoft is doing it, and you can see the growth they're getting out of it, Amazon has to get on it, because growth at AWS has been a big issue for them, especially since it has the best margins in the business.
So the fact that they're adding this makes complete sense. You can expect them to really start pushing this now, and you would expect analysts to be asking about it on the next earnings call. AI has been all that people can talk about in 2023, and we're almost in December. It's not going away next year. Just because the clock strikes midnight on the 31st doesn't mean that AI is going away. It's going to continue to rock and roll for these companies.