Nvidia A100: The $10,000 Chip Driving the AI Gold Rush in the Technology Industry

Nvidia’s A100 chips are becoming the must-have tools of the artificial intelligence (AI) industry. Graphics processors like the A100 were originally built to render 3D graphics in games, but they have been repurposed into some of the most critical tools for AI professionals. At roughly $10,000 per chip, Nvidia holds 95% of the market for graphics processors used in machine learning. The A100 is ideally suited to the kind of machine learning models that power tools like Bing AI, ChatGPT and Stable Diffusion; these applications require hundreds or thousands of A100s to perform the calculations needed to train the underlying neural networks. A single A100 on a card can be slotted into an existing server, but many data centers use a system of eight A100s working together. That system, Nvidia’s DGX A100, has a suggested price of nearly $200,000, a price that includes the eight chips.

With Nvidia’s latest earnings report showing an 11% rise in data center sales, driven by growth in its AI chip business, the company’s shares are up 65% so far in 2023, outpacing the S&P 500 and other semiconductor stocks. On the earnings call, Nvidia CEO Jensen Huang made clear that the recent boom in AI sits at the center of the company’s strategy, adding that “the activity around the AI infrastructure that we built, and the activity around inferencing using Hopper and Ampere to inference large language models has just gone through the roof in the last 60 days.”

Companies that find themselves with a hit AI product often need to acquire more A100 chips to handle peak demand or to improve their models.

The A100 now powers many of the AI applications that can write passages of text or draw pictures that look as if a human created them. Companies like Microsoft and Google are fighting to integrate cutting-edge AI into their search engines, as billion-dollar competitors such as OpenAI and Stability AI race ahead and release their software to the public. Indeed, entrepreneurs in the AI space treat the number of A100s they have access to as a sign of progress.

Stability AI, the company that helped develop Stable Diffusion and reportedly holds a valuation of over $1 billion, has access to more than 5,400 A100 GPUs, and that figure doesn’t include cloud providers, which don’t publish their numbers publicly. It’s easy to see how the cost of A100s adds up. For example, an estimate from New Street Research found that the OpenAI-based ChatGPT model inside Bing’s search could require eight GPUs to deliver a response to a question in less than one second. At that rate, Microsoft would need over 20,000 eight-GPU servers just to deploy the model in Bing to everyone; at the DGX A100’s suggested price of nearly $200,000 per eight-GPU system, that works out to roughly $4 billion in infrastructure spending.
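As a rough sanity check, that $4 billion figure can be reproduced with a few lines of arithmetic. The sketch below, in Python, simply multiplies the numbers quoted above (eight GPUs per server, 20,000 servers, a roughly $200,000 DGX A100 list price); it is an illustration of the back-of-envelope math, not New Street Research’s actual model.

```python
# Back-of-envelope reproduction of the infrastructure estimate above.
# All inputs come from figures quoted in the article; the calculation
# itself is an assumption about how the $4 billion total was reached.

GPUS_PER_SERVER = 8          # A100s in one DGX A100 system
SERVERS_NEEDED = 20_000      # eight-GPU servers to serve Bing's user base
PRICE_PER_SYSTEM = 200_000   # suggested DGX A100 price in USD, chips included

total_gpus = GPUS_PER_SERVER * SERVERS_NEEDED
total_cost = SERVERS_NEEDED * PRICE_PER_SYSTEM

print(f"Total A100s: {total_gpus:,}")        # 160,000
print(f"Estimated spend: ${total_cost:,}")   # $4,000,000,000
```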

Unlike other kinds of software, such as serving a webpage, which uses processing power in occasional microsecond bursts, machine learning tasks can occupy a computer’s entire processing power for hours or days at a time. That means companies need access to a lot of A100s, first to crunch terabytes of data quickly enough for a model to recognize patterns during training, and afterwards for “inference”: using the trained model to generate text, make predictions, or identify objects inside photos.
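To make that difference concrete, here is a minimal sketch of the two workload patterns, written in PyTorch purely as an illustration (the article does not describe any specific code); the model, batch sizes, and step counts are toy placeholders.

```python
# Minimal sketch contrasting training and inference GPU workloads.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 10).to(device)   # stand-in for a large model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: loops over the dataset many times, keeping the GPU
# saturated; on a real model this runs for hours or days.
for step in range(1000):                           # real runs: far more steps
    x = torch.randn(512, 1024, device=device)      # a batch of training data
    y = torch.randint(0, 10, (512,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                                # backprop dominates GPU time
    optimizer.step()

# Inference: a single forward pass per user request, but every request
# still needs GPU time, which is why serving at scale requires many A100s.
with torch.no_grad():
    prediction = model(torch.randn(1, 1024, device=device)).argmax()
```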