Amazon's AWS cloud unit has revealed plans to give customers access to Nvidia's latest chips, alongside new chips of its own, for building and running artificial intelligence applications.
Amazon Web Services is working to position itself as the cloud provider with a range of affordable options. But it will offer more than just low-cost Amazon-branded products. Much like its online store, Amazon's cloud will carry premium goods from other vendors, including the highly sought-after GPUs from top AI chipmaker Nvidia.
Since startup OpenAI debuted its ChatGPT chatbot last year, amazing people with its ability to summarise material and produce writing that seems human-authored, demand for Nvidia GPUs has risen dramatically. As businesses raced to add similar generative AI capabilities to their products, Nvidia's chips fell into shortage.
Amazon's two-pronged strategy of producing its own processors while granting users access to Nvidia's latest GPUs may help it gain an edge over Microsoft, its main rival in cloud computing. Microsoft took a similar approach earlier this month when it unveiled the Maia 100, its first AI chip, and said Nvidia H200 GPUs would be available on the Azure cloud.
The news came on Tuesday at the re:Invent conference in Las Vegas. AWS said it will provide users with access to Nvidia's latest H200 AI graphics processing units. It also unveiled its new Trainium2 artificial intelligence chip and the general-purpose Graviton4 processor.
The new Nvidia GPU is the successor to the H100, the chip OpenAI used to train its most sophisticated large language model, GPT-4. With big businesses, startups and government organisations all competing for a limited supply of the chips, there is also considerable demand to rent them from cloud providers such as Amazon. According to Nvidia, the H200 generates output almost twice as quickly as the H100.
Amazon's Trainium2 chips are built for training AI models, including the kind that power AI chatbots such as OpenAI's ChatGPT and its competitors. Anthropic, an OpenAI rival, and the startup Databricks intend to build models with the new Trainium2 chips, which Amazon says deliver four times the performance of the original Trainium.
Based on Arm architecture, the Graviton4 chips use less energy than comparable chips from AMD or Intel. Graviton4 offers 30% better performance than the current Graviton3, which AWS says should enable greater output for a lower price. With inflation running unusually hot, central banks have raised interest rates; companies that want to keep using AWS while cutting costs to weather the economy might consider switching to Graviton.
According to Amazon, more than 50,000 AWS users are currently utilising Graviton chips.
Lastly, AWS said it will run more than 16,000 Nvidia GH200 Grace Hopper Superchips, which combine Nvidia GPUs with Nvidia's Arm-based general-purpose CPUs, as part of its expanding partnership with Nvidia. This infrastructure will be available to both AWS customers and Nvidia's own research and development team.
Since releasing its EC2 and S3 cloud computing and data storage services in 2006, AWS has launched more than 200 cloud products. Not all of them have succeeded. Some have been discontinued, and others go long stretches without updates, allowing Amazon to reallocate resources. Still, the company keeps funding the Trainium and Graviton programs, a sign that Amazon sees genuine market demand.
AWS did not disclose release dates for virtual machine instances backed by Trainium2 chips or Nvidia H200 GPUs. Customers can begin testing Graviton4 virtual machine instances now, ahead of their commercial availability in a few months.