Researchers at NVIDIA have developed a new tool that makes it simple to transform photographs into 3D objects. The company says the tool, NVIDIA 3D MoMa, enables game developers, concept artists, and architects to easily import objects into a graphics engine for modification.
The technology works through inverse rendering, a technique that can turn a collection of still photographs into a 3D representation of an object or a scene.
David Luebke, vice president of graphics research at NVIDIA, describes this concept as a kind of “holy grail” for computer vision and computer graphics.
According to Luebke, the NVIDIA 3D MoMa rendering pipeline “uses the sheer computing horsepower of NVIDIA GPUs and the machinery of current AI to swiftly build 3D models that designers can import, modify, and expand without constraint in existing applications.” The pipeline, he adds, frames the inverse rendering problem as a GPU-accelerated differentiable component.
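The idea of framing inverse rendering as a differentiable problem can be illustrated with a toy sketch: if the renderer is differentiable with respect to its parameters, those parameters can be recovered from target images by gradient descent. This is a deliberately minimal, hypothetical example with a one-parameter “renderer”; NVIDIA's actual pipeline recovers full 3D geometry, materials, and lighting.

```python
# Toy sketch of inverse rendering as differentiable optimization.
# The "renderer" here just scales a base image by a brightness
# parameter p; real inverse rendering optimizes shape, materials,
# and lighting the same way, via gradients of an image-space loss.
import numpy as np

def render(p, base):
    """Stand-in differentiable renderer: brightness p times a base image."""
    return p * base

def inverse_render(target, base, lr=0.1, steps=200):
    """Recover p by gradient descent on the squared image-space error."""
    p = 0.0
    for _ in range(steps):
        residual = render(p, base) - target       # per-pixel error
        grad = 2.0 * np.sum(residual * base)      # analytic d(loss)/dp
        p -= lr * grad / base.size                # gradient step
    return p

base = np.ones((4, 4))
target = render(0.75, base)      # "photograph" made with unknown p = 0.75
p_est = inverse_render(target, base)
print(round(p_est, 3))           # converges close to 0.75
```

The same loop structure scales up once the renderer exposes gradients for every scene parameter, which is what GPU-accelerated differentiable rendering provides.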
3D MoMa collects a number of photographs and produces a 3D representation of an object as a triangular mesh with textured materials, a format NVIDIA describes as a common language used by 3D tools across multiple industries.
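To make the output format concrete: a triangular mesh is simply a list of vertex positions plus triangles defined by vertex indices, which is why it imports cleanly into existing tools. The sketch below writes a hypothetical single-triangle mesh in Wavefront OBJ, one widely supported interchange format; 3D MoMa's real output additionally carries material and texture data.

```python
# Minimal sketch: a triangular mesh as vertices plus index triples,
# serialized to Wavefront OBJ text. Example data is hypothetical.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # OBJ face indices are 1-based

def to_obj(vertices, faces):
    """Serialize a mesh to OBJ: one 'v' line per vertex, one 'f' per face."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"f {a} {b} {c}" for a, b, c in faces]
    return "\n".join(lines)

obj_text = to_obj(vertices, faces)
print(obj_text)
```

Because nearly every 3D engine and modeling package reads meshes in this vertices-plus-faces form, a tool that emits them slots directly into existing pipelines.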
3D MoMa Utilizes Industry-Standard Tools
Game studios, for instance, frequently use sophisticated photogrammetry techniques to produce 3D objects like these, which NVIDIA says requires a great deal of time and manual labor. NVIDIA's demonstration earlier this year of quickly transforming a series of images into 3D scenes was impressive, but it did not produce the triangular mesh that would have made those captures simple to modify.
NVIDIA's 3D MoMa changes that, producing triangular mesh models in about an hour on a single NVIDIA Tensor Core GPU. According to the company, the output is immediately compatible with the 3D graphics engines and modeling tools already in use by creators across a variety of industries.