Meta, the parent company of Facebook, Instagram, and WhatsApp, has published a research paper describing its new AI system, Meta 3D Gen. The system generates 3D models from text prompts, producing high-quality 3D assets complete with high-resolution textures and material maps in under a minute.

The paper, authored by Meta's researchers, explains that Meta 3D Gen is an integrated pipeline built from two AI-driven sub-systems: Meta 3D AssetGen and Meta 3D TextureGen. The system interprets a user's text prompt to produce AI-generated 3D content, including characters, props, and scenes. Users can also supply an existing 3D mesh (the underlying geometric structure of a 3D model) and instruct the system to texture it. Meta notes that when creating 3D content from scratch, the system supports physically based rendering (PBR), so the generated materials respond realistically to lighting.

The process begins with a text prompt describing the 3D object to be created and its design elements, including texture. In the first stage, Meta 3D AssetGen generates an initial 3D asset from the prompt, producing a 3D mesh with texture and PBR material maps in approximately 30 seconds. In the second stage, Meta 3D TextureGen refines the texture and PBR maps of the asset created in the first stage, taking roughly 20 seconds. This sub-system can also operate independently as a text-to-texture generator: when given an already existing 3D mesh and a text prompt specifying the desired texture, the system runs only the second stage.

For example, if a user asks the system to create a T-rex wearing a green wool sweater, it first generates the 3D shape of the T-rex and then adds the texture of the wool sweater along with other color details, producing a detailed, textured 3D model from a simple text input.
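The paper does not describe a public API, so the sketch below is only a hypothetical Python outline of the two-stage flow as reported here. All names (Asset3D, AssetGen, TextureGen, ThreeDGenPipeline, text_to_3d, retexture) are invented for illustration and the bodies are placeholders standing in for the generative models.

```python
from dataclasses import dataclass, field


@dataclass
class Asset3D:
    """Minimal stand-in for a generated 3D asset: a mesh plus texture and PBR maps."""
    mesh: str                                      # placeholder for mesh geometry
    texture: str                                   # placeholder for the texture map
    pbr_maps: dict = field(default_factory=dict)   # e.g. metallic, roughness maps


class AssetGen:
    """Stage I: text prompt -> initial mesh with texture and PBR maps (~30 s in the paper)."""
    def generate(self, prompt: str) -> Asset3D:
        # A real system would run a text-to-3D generative model here.
        return Asset3D(
            mesh=f"mesh for: {prompt}",
            texture=f"initial texture for: {prompt}",
            pbr_maps={"metallic": "initial", "roughness": "initial"},
        )


class TextureGen:
    """Stage II: refine texture and PBR maps for a given mesh (~20 s in the paper).
    Can also run standalone as a text-to-texture generator."""
    def texture(self, mesh: str, prompt: str) -> Asset3D:
        # A real system would synthesize high-resolution texture and PBR maps here.
        return Asset3D(
            mesh=mesh,
            texture=f"refined texture for: {prompt}",
            pbr_maps={"metallic": "refined", "roughness": "refined"},
        )


class ThreeDGenPipeline:
    """End-to-end flow described in the article: Stage I followed by Stage II."""
    def __init__(self) -> None:
        self.asset_gen = AssetGen()
        self.texture_gen = TextureGen()

    def text_to_3d(self, prompt: str) -> Asset3D:
        draft = self.asset_gen.generate(prompt)              # Stage I: initial asset
        return self.texture_gen.texture(draft.mesh, prompt)  # Stage II: texture refinement

    def retexture(self, mesh: str, prompt: str) -> Asset3D:
        # When the user supplies an existing mesh, Stage I is skipped entirely.
        return self.texture_gen.texture(mesh, prompt)


if __name__ == "__main__":
    pipeline = ThreeDGenPipeline()
    asset = pipeline.text_to_3d("a T-rex wearing a green wool sweater")
    print(asset.texture)
```

The retexture path mirrors the standalone text-to-texture mode described above: the same second-stage component handles both refining a freshly generated mesh and texturing a user-supplied one.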