Meta Launches AssetGen 2.0 to Power Next-Gen 3D Creation

Source: Meta Horizon Blog
  • New model generates detailed 3D shapes and high-quality textures from text and image prompts
  • Built with a single-stage diffusion system and used internally for Horizon; wider release planned this year

Meta has introduced AssetGen 2.0, its latest tool for generating complete 3D assets from simple text or image prompts. The system pairs two models, one that produces detailed 3D meshes and another that adds production-ready textures, to streamline asset creation for creators.

The update moves to a single-stage diffusion approach that better preserves geometry and surface detail. Texture generation has also been upgraded with higher resolution, stronger view consistency, and in-painting support. Meta trained the model on a large dataset of 3D assets.

AssetGen 2.0 is already used inside Meta; a broader rollout to Horizon creators is planned for later this year. Meta says the tool is a step toward making 3D creation as accessible as 2D, opening new possibilities for artists, designers, and developers on its Horizon and Avatar platforms. Future updates will allow it to generate full 3D scenes by sequentially creating individual objects from simple prompts.


🌀 Tom's Take:

3D asset creation can be costly and time-consuming, which often prevents creators and brands from developing immersive content. GenAI advances like this are instrumental in lowering that barrier and inviting more people to create.


Disclosure: Tom Emrich has previously worked with or holds interests in companies mentioned. His commentary is based solely on public information and reflects his personal views.
