🔓 Remix Reality Insider: One model scales. One device won't.
Source: Midjourney - generated by AI

Your premium drop on the programmable future of reality.

đŸ›°ïž The Signal

This week’s defining shift.

The dream of a single, do-everything pair of smartglasses is fading. At AWE USA 2025, Google and Qualcomm made it clear that the future of head-worn wearables will follow a two-track path: spatial computers and smartglasses.

The smartglasses umbrella itself covers multiple options, from wired and wireless AR glasses to a growing class of lightweight, assistant-driven AI glasses designed for everyday use.

  1. Spatial Computers – Mixed Reality headsets (Apple Vision Pro, Meta Quest 3, Project Moohan)
  2. Smartglasses – AI glasses (Ray-Ban Meta, Oakley Meta HSTN) and AR glasses (Xreal One, Rokid AR Spatial)

Just like laptops and smartphones coexist today, spatial computers and smartglasses will evolve in parallel, not into one another. And the pace is accelerating: Snap is preparing a consumer debut of its AR Specs in 2026, Meta’s AI glasses with Oakley are targeting athletes, and Samsung’s Project Moohan mixed reality headset is reportedly arriving this fall.

The race for your face is on, and no single device will win it alone.


🧠 Reality Decoded

Your premium deep dive.

One Model, Many Machines. Physical AI is going general, and the implications are massive. This week, two signals stood out: 1X’s Redwood and Wayve’s Embodied AI.

1X’s Redwood model powers NEO Gamma, a humanoid robot that moves through homes, grasps unfamiliar objects, and responds to voice commands, all with full-body coordination and no cloud dependency. It doesn’t just know what to do. It figures out how, in real time.

Meanwhile, Wayve’s Embodied AI is driving cars through London’s chaotic streets without fixed maps or scripted logic. It learns by doing, like humans do, and adapts across environments, vehicles, and use cases.

This marks a shift from bespoke models tied to single tasks or hardware to generalized intelligence that can operate across many machines. What we’re seeing is physical AI finally stepping into its “platform era,” where one model can power robots that walk, drive, or deliver and learn across them all.

Key Takeaway:
We’re moving from robots that are programmed to robots that are pretrained. Like language models trained on human conversation, these motion models learn how bodies, robotic or human, move through space and solve problems.

📡 Weekly Radar

Your weekly scan across the spatial computing stack.

🤖 Hexagon launches AEON Humanoid Robot [Physical AI]
AEON will support manufacturing, aerospace, transportation, warehousing, and logistics.

🚚 Meta and Oakley debut performance AI eyewear [Physical AI]
Oakley Meta HSTN is a new line of Performance AI glasses designed with athletes in mind.

đŸ•¶ïž Vuzix raises $5M for waveguide production [Immersive Interfaces]
Vuzix is expanding optical manufacturing with new funding from Quanta, aiming to support the next wave of headworn displays.

☕ Nestlé scales content with AI twins [Simulated Worlds]
Nestlé is deploying AI and digital twins to localize packaging content across 60+ markets in minutes, not weeks.

👠 Wanna adds AR try-on for heels [Perception Systems]
Wanna and Perfect Corp debut AR try-on for high heels, expanding virtual footwear beyond sneakers.


🌀 Tom's Take

Unfiltered POV from the editor-in-chief.

At AWE, the conversation wasn’t just about XR anymore; it was about the whole stack of spatial computing. Robots took the stage, and presenters made it clear: the same models powering XR content are now driving physical intelligence.

Spatial computing is no longer a collection of verticals. Domains that once felt separate, XR, robotics, simulation, and perception, now clearly overlap. The stack is becoming unified and horizontal: intelligence flows through machines, environments, and applications without needing to live in just one of them.

The future isn’t one interface; it’s many machines, running on one model, across a deeply spatial world.


🔼 What’s Next

3 signals pointing to what’s coming next.

  1. The AI Glasses Market Will Split by Compute Strategy
    Expect clear tiers to emerge between cloud-first AI glasses (relying on server-based processing) and local-first glasses (like those powered by Qualcomm’s on-device GenAI), with tradeoffs in privacy, speed, and power.
  2. Spatial Infrastructure Will Quietly Take Center Stage
    As hardware diversifies, the long game is being played in the background, with companies investing in geospatial context, simulation frameworks, and perception layers that make all these devices useful. From Niantic’s geospatial model to Nestlé’s digital twin pipelines, the real value is shifting to the invisible infrastructure that helps systems understand the world around them.
  3. Smartglasses Will Specialize by Use Case
    As the race for your face accelerates, smartglasses will increasingly differentiate by intent (fitness, gaming, content creation), not just hardware. Oakley Meta HSTN isn’t just a new frame; it’s a signal that smartglasses will compete on purpose, not form factor. The category is shifting toward role-specific design, built for athletes, assistants, or creators, not one device that does it all.

🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.

🚀 Know someone who should be reading this? Send them to remixreality.com and invite them to join the inner circle.