Later this year, Nvidia will begin selling a new AI accelerator chip as part of its effort to maintain its lead in AI computing. The faster chip could let AI researchers speed up their work and build more advanced AI models, particularly for difficult tasks like understanding human language and piloting self-driving cars.
Nvidia Chief Executive Jensen Huang announced the H100 “Hopper” chip in March, and it is slated to start shipping next quarter. The GPU contains 80 billion transistors and measures 814 square millimeters, almost as large as is physically possible with today’s chipmaking equipment.
The H100 competes with AMD’s MI250X, Google’s TPU v4, and Intel’s forthcoming Ponte Vecchio, all of them massive, power-hungry AI processors. AI training systems most often live in data centers, crammed with racks of computing gear and fed by thick copper power cables.
The new processor embodies Nvidia’s journey from a maker of graphics processing units for video games to an AI powerhouse. The company got there by tailoring its GPUs to the mathematics of AI, such as multiplying large arrays of numbers.
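To make the point concrete, here is a minimal sketch (in NumPy, not Nvidia's own code) of the kind of array arithmetic at the heart of AI workloads: a single dense neural-network layer is essentially one large matrix multiplication, the operation GPU hardware is tuned to run in bulk. The array shapes and names here are illustrative assumptions, not drawn from any real model.

```python
import numpy as np

# Hypothetical shapes: a batch of 4 inputs, each with 8 features,
# passed through a dense layer producing 3 outputs per input.
rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 8)).astype(np.float32)  # layer inputs
weights = rng.standard_normal((8, 3)).astype(np.float32)      # learned weights

# The core operation GPUs accelerate: multiplying arrays of numbers.
outputs = activations @ weights

print(outputs.shape)  # one (4, 3) result per forward pass
```

Training a real model repeats operations like this billions of times across far larger arrays, which is why a faster chip translates directly into faster experimentation.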
Circuitry for accelerating AI is becoming increasingly important as the technology appears in everything from iPhones to Aurora, a machine slated to be the world’s fastest supercomputer. Chips like the H100 are essential for tasks such as training an AI model to translate live speech into another language or to generate video captions automatically. With faster performance, AI engineers can tackle more demanding jobs like autonomous vehicles and speed up their experimentation; language processing remains one of the major areas of development.