🔓 Remix Reality Insider: Offloading Risk with AI & AR
Your premium drop on the systems, machines, and forces reshaping reality.
🛰️ The Signal
This week’s defining shift.
Spatial computing is stepping onto the frontlines of human safety.
From shipyards to explosive industrial zones, machines are taking on new roles to protect people, whether by enhancing workers' capabilities with AR or by stepping in, through robotics, when the risks are too high.
This week’s news surfaced signals like these:
- ABS and Persona AI are piloting humanoid robots in shipyards to handle inspection tasks in tight, complex spaces, gathering the certification data needed to prove they can safely take on work that puts humans at risk.
- ANYbotics secured fresh investment to scale its Ex-certified ANYmal X robot, designed to autonomously inspect hazardous industrial environments where it’s unsafe for people to go.
- Augmentir is combining AR overlays with AI guidance to cut onboarding times and improve task accuracy, helping workers avoid costly errors and accidents.
Why this matters: Industrial sites are among the most hazardous workplaces. By pairing humans with machines, spatial computing acts as a safety net, either augmenting workers with better tools or taking their place in situations too dangerous to face.
🧠 Reality Decoded
Your premium deep dive.
Smartglasses may be the next big leap after smartphones, but bringing virtual content into our lives still relies on something physical: brick-and-mortar retail.
Here are a few reasons physical retail matters in taking smartglasses from early adoption to the mass market.
Fit and Comfort
Glasses are personal. Prescription frames take into consideration interpupillary distance, nose bridge, and lens alignment to fit and function properly. AR glasses add even more complexity with sensors, displays, and new controls like neural interfaces. Retail can provide expert fittings to ensure that users leave with a device that works.
Education and Support
Glasses are familiar, but connected eyewear is not. Beyond the form factor, smartglasses require users to learn new forms of input, including gestures, voice, or gaze, along with a new set of use cases. Retail staff play a critical role in guiding users through a smooth first experience, which boosts confidence, drives sales, and reduces returns.
Premium Experience
At $700 or more, smartglasses cost far more than regular eyewear, and customers expect premium service to match. Physical retail delivers a high-touch, personal experience, including appointments that feel exclusive with one-on-one guidance. The experience is designed to make people more likely to buy.
Building Trust and Momentum
Seeing smartglasses in stores makes them feel real and ready for everyday use, not something from the future you read about. Physical retail builds trust by letting people touch and try devices at familiar branded locations. Passersby play a role too: watching others try smartglasses spreads the word and helps build momentum for adoption.
Key Takeaway:
The irony of smartglasses is that devices built to open a portal to virtual content need physical retail to gain momentum. Stores aren't just a sales channel. They are an essential ingredient in the success of this post-smartphone category.
📡 Weekly Radar
Your weekly scan across the spatial computing stack.
🥗 Chef Robotics and Proseal Partner on Flexible Meal Line Automation
- Chef Robotics and Proseal have partnered on a flexible system that automates both meal assembly and packaging for fresh and frozen foods.
- Why this matters: High-mix food production needs robots that can keep up, switching tasks fast, handling different ingredients, and staying flexible without stopping the line. This partnership shows AI and modular automation are finally catching up to that reality.
🏬 Meta Lab Retail Store Returns to LA, Adds Pop-Ups in Las Vegas and NYC
- The spaces will offer demos of AI glasses and headsets, including Meta Ray-Ban Display and Meta Neural Band.
- Why this matters: Physical retail will be critical to the adoption of Meta’s glasses lineup, especially products like the Ray-Ban Display and Neural Band that require education and fit for a great user experience.
🎓 Purdue University Opens Research Facility, Launches Certificates for Spatial Computing
- Purdue University opened a new spatial computing hub powered by Apple Vision Pro to support immersive research and training.
- Why this matters: Purdue is doubling down on spatial computing, giving its students a head start in the next wave of technology. With access to Apple Vision Pro, they’ve got a real edge, and it shows how important partnerships with industry are for pushing immersive tech forward in schools.
👓 Crowdfunders Flock to Rokid’s Lightweight AR Glasses, Surpassing $2M
- The device features dual Micro LED waveguide displays, an open AI ecosystem, and weighs significantly less than Meta’s Ray-Ban Display glasses, according to the company.
- Why this matters: $2 million in pledges in under three weeks shows there's real momentum behind smartglasses. Rokid is betting on a lighter pair that works outside a closed ecosystem.
🌍 Nilo Raises $4M to Expand AI-Native 3D Creation Platform
- The platform targets Gen Alpha with AI-powered, browser-based tools for creating multiplayer 3D worlds from natural prompts.
- Why this matters: Gen Alpha has grown up in virtual worlds, and making creative tools feel playful, instant, and accessible is a winning recipe. Tapping into this behavior with AI-native 3D creation is a smart, timely move.
🎮 Niantic Spatial and Kojima Productions Join Forces on Real-World Storytelling
- The collaboration aims to bring Kojima’s narrative experiences into the real world using geospatial AI.
- Why this matters: Both teams have a proven track record in storytelling and bringing narratives into the physical world through AR. We’ll be watching closely to see what comes from this collaboration.
🌱 Earthmover Raises $7.2M to Power Physical AI With Scientific Data Infrastructure
- The company enables faster model training and simulation development across climate, weather, and other scientific domains.
- Why this matters: As physical AI systems scale, they need stronger infrastructure to manage the data behind simulations and digital twins. Earthmover is building that foundation.
🤖 Figure Launches Project Go-Big to Build World’s Largest Humanoid Training Dataset
- Project Go-Big captures large-scale, real-world human video to train Helix, Figure’s Vision-Language-Action model.
- Why this matters: Project Go-Big shows that Figure is serious about leading in humanoid robotics. Helix is their big bet on building robot intelligence through real-world data.
👀 Be My Eyes Helped Shape Meta's New Wearables SDK
- Be My Eyes partnered with Meta as an alpha collaborator to co-develop the Wearables Device Access Toolkit.
- Why this matters: Wearables are beginning to fulfill a long-held promise in assistive technology, offering users a powerful way to access both human support and AI assistance. With the toolkit soon in developers’ hands, more apps may follow Be My Eyes in unlocking the potential of AI glasses for accessibility.
🌀 Tom's Take
Unfiltered POV from the editor-in-chief.
Lately, my TikTok feed has been filled with videos of food delivery robots in distress and humans swooping in to save them. This role reversal is the complete opposite of the robot story we are used to seeing in movies. Watching robots tipped over on sidewalks or taking a wrong turn into traffic, what strikes me most is not the failure of the tech but the reaction of the people nearby. Almost every clip shows strangers rushing in to help. They lift the robot back onto its wheels, guide it across the street, or stop cars to let it pass.
It got me thinking about how quickly we can form attachments to machines. Is it because they have a lifelike design? Delivery bots with eyes, like those from Serve Robotics, or legs, like those from RIVR, can feel more like pets than vehicles or appliances. And their clear intent on completing a task gives the impression that the machine is somehow intelligent, even sentient. Add a touch of personality, like cute noises that react to your voice or actions, and our instinct to help is easily triggered.
These videos are about more than saving someone's order of burritos. They are about the social contracts we’re beginning to build with machines in public spaces. As robots become part of our daily lives, we are finding out that trust and care flow both ways. We expect them to work safely and reliably, and when they stumble, it turns out we’re surprisingly willing to lend a hand.
🔮 What’s Next
3 signals pointing to what’s coming next.
- Spatial video makes a comeback
We may be seeing spatial video make a comeback, reviving an immersive format that defined the early days of consumer VR. GoPro launched the MAX2, a rugged 360-degree camera that captures True 8K video, while Apple expanded its Vision Pro lineup with immersive films from CNN, BBC, Red Bull, and HYBE. With pro-grade cameras, AI-powered editing, and mainstream distribution, spatial video is finding new relevance as both hardware and content ecosystems evolve.
- OLED reshapes immersive displays
OLED is evolving from a premium screen technology into the backbone of new immersive formats. Looking Glass’s new Hololuminescent Display adds a layer of 3D depth inside a standard LCD or OLED panel using patented optics, transforming flat video into holograms without the need for headsets or custom content. Meanwhile, Pimax’s Micro-OLED headsets use ultra-dense pixels, deep blacks, and lightweight optics to raise the bar for personal VR displays. Both of these devices show how this display technology is enabling virtual storytelling in public spaces and privately inside headsets.
- Autonomous vehicles double down on safety
AV makers are prioritizing safety as a driver of adoption. Waymo’s latest report found 91% fewer serious crashes than human drivers, along with major reductions in pedestrian and cyclist injuries. In Tokyo, Nissan demoed its next-gen ProPILOT advanced driver assistance system. Featuring 11 cameras, 5 radar units, and advanced LiDAR with AI, the system is designed to spot hazards early and handle city traffic safely. The real milestone for AVs may not be autonomy but making the roads a safer place.
🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.
📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!