Meta Introduces SAM 3D to Reconstruct Objects and Humans from a Single Image
- Meta debuts SAM 3D, two new models for reconstructing 3D objects and human bodies from natural images.
- The models are available through the Segment Anything Playground and power features like Facebook Marketplace’s new View in Room.
Meta has launched SAM 3D, a new system for turning everyday images into 3D models. The release includes SAM 3D Objects for reconstructing physical scenes and objects, and SAM 3D Body for estimating human pose and shape. Meta says that these tools are designed to make it easier to work with 3D content in areas like robotics, media, science, and sports.
Source: Meta
SAM 3D Objects was trained on nearly one million images and over three million 3D mesh predictions. It reconstructs fully textured objects from a single photo, including objects that are partially hidden or shown at odd angles. SAM 3D Body uses prompt-based controls like keypoints and segmentation masks, and outputs a detailed body mesh in Meta’s new Momentum Human Rig format. Both models were trained on real-world data using a staged process to improve accuracy.
Anyone can try SAM 3D in the newly released Segment Anything Playground by uploading images and generating 3D models. Meta is also using the technology in its products, including the “View in Room” tool on Facebook Marketplace, which lets users preview furniture and decor in their own spaces using real photos.
🌀 Tom’s Take:
Fast 3D reconstruction from single images could make it easier to create and test content across spatial computing use cases.
Disclosure: Tom Emrich has previously worked with or holds interests in companies mentioned. His commentary is based solely on public information and reflects his personal views.