🔓 Remix Reality Insider: From World-Building to Living Worlds
Your premium drop on the systems, machines, and forces reshaping reality.
🛰️ The Signal
This week’s defining shift.
Generative AI is transforming immersive world development: not just how fast we build worlds, but what those worlds can be.
New AI systems are no longer just speeding up production. They’re opening up new ways to imagine, design, and interact with virtual spaces. From text-to-3D scene generation to AI-assisted scripting and visual style control, creators can build rich, interactive environments in minutes, and those worlds can now evolve in ways that weren’t possible before.
This week’s spatial computing news surfaced signals like these:
- DeepMind Genie 3 — A real-time world model that generates interactive 3D scenes from text prompts, with consistent physics, visuals, and memory over minutes. Users can alter weather, objects, or characters mid-experience, with no 3D scans required.
- Meta Horizon Worlds AI tools — Assistants and style references that automate setup, scripting, and design consistency for VR and mobile worlds, lowering barriers for new creators.
Why this matters: Generative AI is moving immersive creation from manual asset assembly to dynamic, living worlds that respond and adapt. This changes who can create, how quickly they can iterate, and the kinds of experiences they can imagine.
🧠 Reality Decoded
Your premium deep dive.
Every new computing platform starts with a bit of tinkering. Snap’s latest Spectacles follow that path, with early projects on the fifth-gen glasses showing what AR might look like as a natural part of daily life. These experiments highlight what it takes to design for a world where the interface sits on your face.
One clear takeaway is the rise of embodied interaction. Palm-based navigation, hand-tracked games, and gesture-driven photography point to a shift where the body becomes the controller. Instead of tapping glass, we lift a hand to call up a map, throw an arm to hit a dartboard, or pinch the air to take a picture. This kind of input feels more instinctive and more social, turning technology into something that works with us, not just for us.
Another is the value of contextual computing. Whether it’s an AI plant companion that tells you when to water or the ability to look at a book and talk to it to learn about its contents, these prototypes use AR to blend utility with delight. They’re a reminder that wearables have to fit into life’s flow, not pull us out of it.
Finally, there’s the lesson of layered play. Multiplayer AR games, scavenger hunts, and gamified chores show how adding a digital layer can turn mundane spaces into something extraordinary. By reimagining the world as the game board, they hint at how everyday environments can become dynamic, shared experiences.
Key Takeaway:
The early Spectacles experiments are more than proofs of concept; they’re a blueprint. Designing for AR glasses means building around the body, the moment, and the shared space. Those who master this will define how we live in the post-smartphone era.
📡 Weekly Radar
Your weekly scan across the spatial computing stack.
❤️‍🩹 Fourier Introduces GR-3, a Humanoid Robot Focused on Emotional Interaction and Care
- GR-3 is a full-size “Care-bot” developed to support both functional tasks and emotional engagement across public, clinical, and personal settings.
- Why this matters: Most humanoids are built for tasks. Fourier is taking a different angle by focusing on emotional connection as the core feature. GR-3 shows they’re thinking through that at every level, from soft materials to expressive sensing and movement.
🏠 ABB Robots and Cosmic Microfactories Rebuild Wildfire-Hit Homes in L.A.
- ABB and Cosmic are building homes in Pacific Palisades using a mobile robotic microfactory launched after the 2025 wildfires.
- Why this matters: This shows what’s possible when automation meets urgent need. Building homes in 12 weeks at a lower cost depends on spatial computing from robotics to real-time machine perception.
🚗 Humanoid Robot Achieves Milestone in Autonomously Opening Car Door
- AiMOGA Robotics' Mornine used reinforcement learning and onboard sensors to open a car door without scripts or a remote control.
- Why this matters: Mornine wasn’t hard-coded for this task. She figured it out. That’s the real headline. Her robot brain, a trained AI system, learned how to spot a handle, reach for it, and open a car door she’d never seen before. And then she did it, for real, in a dealership.
👓 Meta to Unveil Ultra-Realistic and Ultrawide VR Headsets at SIGGRAPH 2025
- Meta’s Reality Labs Research will debut two advanced VR prototypes, Tiramisu and Boba 3, offering major leaps in image quality and field of view.
- Why this matters: Meta's research prototypes show just how far the limits of VR realism can be pushed. At the same time, they show how difficult it is to combine human-level resolution and field of view in a wearable device fit for the mass market.
⚡ Enel Green Power Adopts VR Vision Training for Global Renewable Tech Teams
- VR Vision is building virtual reality training simulations for Enel Green Power technicians working on turbines, substations, and solar farms.
- Why this matters: Virtual training is gaining ground across industries, and Enel Green Power’s move signals that energy is no exception. In high-risk settings like turbines and substations, VR offers a safer, lower-stakes way to build hands-on skills.
🎥 Blackmagic Adds Immersive Video Tools to DaVinci Resolve for Apple Vision Pro
- DaVinci Resolve now supports editing, grading, VFX, and spatial audio for Apple Immersive Video.
- Why this matters: Spatial video has been waiting for its tipping point. With Blackmagic delivering both the capture and the post tools, the format finally has a full production pipeline. This could be the moment immersive video moves from experimental to executable.
📡 Nokia Introduces XR Radio Network Solution for Digital Twin Visualization
- Nokia has launched a new XR-based tool that enables mobile operators to view and manage digital versions of their radio networks.
- Why this matters: Spatial computing lets us see what was once invisible: airwaves, signal paths, service gaps. That visibility translates into better decisions, stronger accountability, and new ways to grow the business.
🗺️ Niantic Spatial SDK Adds Meta Quest 3 Support with Passthrough Camera Access
- Niantic Spatial introduces beta support for Meta Quest 3, unlocking core spatial computing tools like VPS, live meshing, and object detection.
- Why this matters: Pairing Niantic Spatial's tools with the Quest 3's passthrough cameras turns the headset into a far more capable mixed reality platform for developers, which should translate into more meaningful augmented reality experiences on the device.
🤖 OpenMind Raises $20M to Launch Hardware-Agnostic OS for Robots
- OpenMind closed a $20 million round led by Pantera Capital to scale OM1, its operating system for intelligent machines.
- Why this matters: While others double down on building the body, OpenMind is focused on the mind, and it is approaching this as an open, shared intelligence layer that spans the entire robotics ecosystem.
🌀 Tom's Take
Unfiltered POV from the editor-in-chief.
We often talk about physical AI changing how we move, deliver, and interact with the world, but what’s also becoming evident is that it’s building an entirely new media network in plain sight.
Coco Robotics’ sidewalk delivery bots wrapped in movie branding are just the start. The idea of turning robots into rolling billboards isn’t new, but now these machines are hyper-local, measurable, and interactive. You don’t just see them pass by your house; they come to your door and ask you to interact with them. That’s an entirely different kind of engagement from a static bus shelter ad or a car driving by with a large video screen.
After delivery robots, autonomous vehicles are the next frontier in marketing. We’ve spent decades thinking of AVs as just transportation, but they’re also fully immersive, mobile environments. Imagine a Starbucks-branded car that serves drinks on the way to work, or an Xbox car where your commute becomes a playable game session. These are experiential campaigns on wheels, monetizing our idle travel time in ways traditional advertising couldn't imagine.
And then there are humanoid robots, which are on the horizon. Today, we think about them folding laundry or carrying groceries, but there’s nothing stopping them from being ad-supported. Imagine a free or subsidized robot helper that occasionally suggests a product, books a service, or runs a sponsored errand, mixed right into your daily interactions. Done right, it wouldn’t feel like a pop-up ad. It would feel like a personal recommendation from a trusted assistant.
The media industry has always followed attention. Physical AI is moving that attention into motion, on sidewalks, in vehicles, and soon in our homes. The real opportunity is in blending utility and storytelling at these new touchpoints while keeping the trust that makes them work.
🔮 What’s Next
3 signals pointing to what’s coming next.
- Mobile AR growth is accelerating
Snap’s latest earnings show that mobile AR isn’t just established, it’s expanding its footprint. Daily AR engagement on Snapchat grew to over 350 million people in Q2, with Lens use climbing to 8 billion times per day. The developer base has topped 400,000 creators, producing more than 4 million Lenses, supported by easier creation tools and richer features. Commerce-focused AR is also gaining momentum, with Perfect Corp posting double-digit revenue gains from virtual try-on tools that convert more shoppers into buyers. Both of these earnings reports show how mobile AR is scaling in audience and impact. This not only underscores the immediate opportunity of AR today, but it also lays the groundwork for a smoother transition to consumer AR glasses.
- Flat-screen content is stepping into XR
Broadcasters, streaming companies, and studios are increasingly using AR and VR to extend their IP beyond the screen and into the spaces where their audiences are. Netflix’s partnership with Sandbox VR brings hit shows like Squid Game into full-body, location-based VR experiences, letting fans inhabit the worlds they’ve only seen on TV. In the UK, Channel 5’s Seven Wonders series uses Snapchat AR to let viewers explore ancient landmarks in their living rooms while watching the show. These kinds of activations give audiences a more personal connection to the content, while offering media brands a path into the next computing platform.
- Scaling physical AI starts with trust and safety
As autonomous systems move into mission-critical roles, enterprises are prioritizing technologies that make them both capable and trustworthy. Edge AI platforms like SiMa.ai keep perception and decision-making local, reducing latency while protecting sensitive data and meeting privacy requirements. Safety and control platforms like FORT Robotics ensure those same systems operate predictably, communicate securely, and fail safely in unpredictable environments. In this next phase of physical AI, the ability to prove safety, security, and reliability will be as important as performance.
🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.
📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!