Nvidia’s Flagship AI Chip Is Reportedly 4.5x Faster Than The Previous Champ


In the widely used MLPerf benchmarks, Nvidia's forthcoming H100 "Hopper" Tensor Core GPU broke performance records at its debut, posting results up to 4.5 times faster than the A100, currently the company's fastest production AI chip.

The "inference" workloads measured by the MLPerf benchmarks (officially "MLPerf™ Inference 2.1") show how well a chip can apply a previously trained machine learning model to fresh data. The MLPerf benchmarks were created in 2018 by MLCommons, a consortium of industry organisations, to give prospective customers a uniform criterion for assessing machine learning performance.
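To make the distinction concrete, the pattern MLPerf Inference times is "train once, then apply the fixed model to new data". A minimal numpy sketch of that pattern follows; the toy linear classifier is purely illustrative and is not MLPerf code (the real suite runs full networks such as ResNet-50 and BERT).

```python
import numpy as np

# Toy stand-in for an already-trained model: fixed classifier weights.
# (Illustrative only -- MLPerf Inference runs real trained networks.)
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))  # 4 input features -> 2 classes

def infer(batch):
    """Apply the trained model to fresh data: one forward pass,
    no weight updates. This is the workload inference benchmarks time."""
    logits = batch @ weights
    return logits.argmax(axis=1)   # predicted class per sample

fresh_data = rng.normal(size=(8, 4))  # 8 previously unseen samples
predictions = infer(fresh_data)
print(predictions.shape)              # one prediction per sample
```

The key point is that inference only reads the weights; how fast a chip can stream batches through that forward pass is what the benchmark scores reflect.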

The H100 performed especially well in the BERT-Large benchmark, which measures natural language processing performance using Google's BERT model. Nvidia credits this result to the Hopper architecture's Transformer Engine, which specifically accelerates transformer models. This suggests the H100 could speed up future natural language models such as OpenAI's GPT-3, which can write in a variety of genres and hold conversational discussions.
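The matmul-heavy core of every transformer model, including BERT and GPT-3, is scaled dot-product attention; those matrix multiplications are the computations Hopper's Transformer Engine accelerates, in part by switching to lower-precision (FP8) arithmetic where it can. A minimal numpy sketch of the operation, with toy sizes rather than BERT-Large's real dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: the matmul-dominated kernel at the
    heart of transformer layers like BERT's."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # token-to-token similarities
    return softmax(scores) @ V               # weighted mix of values

rng = np.random.default_rng(1)
seq_len, d = 6, 8  # toy sizes; BERT-Large's model dimension is 1024
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)   # one context vector per token
```

Because both the `Q @ K.T` and `softmax(scores) @ V` steps are dense matrix multiplies, hardware that executes them at lower precision without losing accuracy can process transformer workloads substantially faster.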

According to Nvidia, the H100 is a top-tier data centre GPU designed for AI and HPC applications such as image recognition, large language models, image synthesis, and more. Although it is still in development, it is widely expected to succeed the A100 as Nvidia's flagship data centre GPU. After the US government imposed restrictions last week on exporting the chips to China, concerns arose that Nvidia might not be able to deliver the H100 by the end of 2022, since part of its development takes place there.

For now the project appears to be back on track: Nvidia said in a second Securities and Exchange Commission filing last week that the US government will permit continued development of the H100 in China. The H100 will be available "later this year," according to Nvidia. If the previous-generation A100 is any indication, the H100 may power a wide range of groundbreaking AI applications in the years to come.