With a new artificial intelligence approach, NVIDIA hopes to make building virtual 3D environments less painful. According to NVIDIA, GET3D can generate 3D characters, objects, and environments. The model is also fast: NVIDIA says GET3D can produce about 20 objects per second on a single GPU.
The model was trained on synthetic 2D images of 3D shapes captured from various angles. Using A100 Tensor Core GPUs, NVIDIA says it took only two days to train GET3D on about 1 million images.
According to NVIDIA's Isha Salian, the model can produce objects with “high-fidelity textures and sophisticated geometric elements.” The shapes GET3D creates, Salian says, “take the form of a triangular mesh, like a papier-mâché model, coated with a textured substance.”
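That description maps onto a standard data layout: vertex positions, triangles that index into them, and per-vertex UV coordinates that map a 2D texture image onto the surface. The sketch below is purely illustrative of that general structure; the names and values are not GET3D's actual output format.

```python
# Minimal sketch of the data a textured triangle mesh carries.
# Values are a toy tetrahedron, not GET3D output.

# Vertices: 3D positions of the mesh's corner points.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]

# Faces: triangles, each listing three vertex indices.
faces = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

# UV coordinates: where each vertex samples the 2D texture image --
# this is what "coated with a textured substance" refers to.
uvs = [
    (0.0, 0.0),
    (1.0, 0.0),
    (0.0, 1.0),
    (0.5, 0.5),
]

print(f"{len(vertices)} vertices, {len(faces)} triangles, {len(uvs)} UVs")
```

Because the geometry and the texture are separate arrays, a renderer can swap the texture image without touching the shape, which is how the same mesh can be re-skinned with different materials.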
Because GET3D outputs objects in standard formats, users should be able to import them directly into game engines, 3D modeling software, and film renderers for editing. That could make it considerably easier to populate dense virtual environments for video games and the metaverse. Other use cases NVIDIA mentions include robotics and architecture.
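One widely supported interchange format of this kind is Wavefront OBJ, a plain-text format most game engines and modelers can import. The article does not say which formats GET3D emits, so this is just a generic illustration of how simple such a serialization is; the function name and toy triangle are assumptions.

```python
import io


def write_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.

    OBJ is a plain-text interchange format that game engines and
    3D modelers commonly import. Note that OBJ face indices are
    1-based, so we offset the 0-based Python indices.
    """
    buf = io.StringIO()
    for x, y, z in vertices:
        buf.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        buf.write(f"f {a + 1} {b + 1} {c + 1}\n")
    return buf.getvalue()


# A single triangle as a toy example (not GET3D output).
tri_vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tri_faces = [(0, 1, 2)]
obj_text = write_obj(tri_vertices, tri_faces)
print(obj_text)
```

Saving that string to a `.obj` file yields an asset that tools such as Blender or a game engine's asset importer can open directly.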
According to NVIDIA, GET3D produced sedans, trucks, race cars, and vans after being trained on a dataset of car images. Trained on animal imagery, it can likewise produce foxes, rhinos, horses, and bears. As one might expect, NVIDIA says that the larger and more varied the training set fed to GET3D, “the more varied and detailed the output.”
Another NVIDIA AI tool, StyleGAN-NADA, lets users apply different styles to an object with text-based prompts. You could give a car a burned-out appearance, turn a model of a house into a haunted one, or, as a video demonstrating the technology suggests, give any animal tiger stripes.
According to the NVIDIA Research team that developed it, future versions of GET3D may be trained on real-world images rather than synthetic ones. Instead of focusing on one object category at a time, researchers could then train the model on many kinds of 3D shapes at once.