Meta Unveils AssetGen 2.0 to Push Boundaries of Text-to-3D Generation

Source: Meta Horizon Blog
  • The upgraded model generates high-fidelity, textured 3D assets from simple text or image prompts.
  • Designed to power Horizon and Avatar platforms, AssetGen 2.0 marks a leap forward in generative 3D creation.

Meta has announced AssetGen 2.0, its next-generation foundation model for 3D content creation, capable of turning text and image prompts into high-quality 3D assets with production-ready textures. The system consists of two specialized models: one for generating detailed 3D meshes using a single-stage diffusion architecture, and another for producing view-consistent, high-resolution textures via a new TextureGen pipeline.
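
For readers who want a mental model of how a two-stage text-to-3D pipeline like this fits together, here is a minimal conceptual sketch in Python. It is not Meta's implementation or API; the class names (MeshDiffusionModel, TextureGenModel) and data structures are hypothetical placeholders that only illustrate the flow from prompt, to untextured mesh, to textured asset.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical placeholder types -- not Meta's API.
@dataclass
class Mesh:
    vertices: List[Tuple[float, float, float]]
    faces: List[Tuple[int, int, int]]

@dataclass
class TexturedAsset:
    mesh: Mesh
    texture_resolution: Tuple[int, int]

class MeshDiffusionModel:
    """Stand-in for a single-stage diffusion model that maps a prompt to geometry."""
    def generate(self, prompt: str) -> Mesh:
        # A real model would denoise a latent 3D representation conditioned on the
        # prompt. Here we return a trivial triangle so the sketch runs end to end.
        return Mesh(
            vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
            faces=[(0, 1, 2)],
        )

class TextureGenModel:
    """Stand-in for a view-consistent texturing stage."""
    def texture(self, mesh: Mesh, prompt: str,
                resolution: Tuple[int, int] = (2048, 2048)) -> TexturedAsset:
        # A real pipeline would render the mesh from multiple views, generate
        # consistent images, and bake them into a UV texture. We only record the
        # requested resolution here.
        return TexturedAsset(mesh=mesh, texture_resolution=resolution)

def text_to_3d(prompt: str) -> TexturedAsset:
    """Two-stage flow: prompt -> mesh -> textured asset."""
    mesh = MeshDiffusionModel().generate(prompt)
    return TextureGenModel().texture(mesh, prompt)

if __name__ == "__main__":
    asset = text_to_3d("a weathered wooden treasure chest")
    print(f"{len(asset.mesh.vertices)} vertices, texture {asset.texture_resolution}")
```

The key design point the sketch captures is the separation of concerns: geometry and texturing are handled by specialized models, with the texturing stage conditioned on both the mesh and the original prompt.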

Building on its predecessor, AssetGen 1.0, the updated model is trained on a large corpus of 3D assets and delivers substantial improvements in geometric consistency, visual fidelity, and texture inpainting, with the single-stage diffusion process yielding a marked jump in output quality.

Meta is already using AssetGen 2.0 internally to create content for Horizon and Avatar-based platforms, and plans to release the tool to creators later this year. The company also teased future capabilities, including auto-regressive generation of full 3D scenes, where environments are built piece by piece from simple input prompts.

The ultimate goal: to make 3D asset creation as accessible as 2D image generation — unlocking new creative workflows for designers, developers, and digital worldbuilders.


🌀 Remix Reality Take:
This is Meta’s boldest step yet toward a text-to-3D future. AssetGen 2.0 doesn’t just generate models — it lays the foundation for spatial storytelling powered by simple language and limitless imagination.


© 2025 Remix Reality LLC. All rights reserved.