🔓 Remix Reality Insider: The New Interfaces of Insight

Source: Midjourney - generated by AI

Your premium drop on the systems, machines, and forces reshaping reality.

🛰️ The Signal

This week’s defining shift.

3D is no longer just for gaming or immersive experiences. It’s being used to generate content for today’s devices, including phones, laptops, and flat screens, as well as to train the systems that will power tomorrow’s devices, from autonomous robots to AR glasses. What started as visual content is now becoming infrastructure for media production, simulation, and machine learning.

This week’s spatial computing news surfaced signals like these:

  • Intangible is flipping spatial creation inside out, using 3D scenes to generate 2D media, accelerating creative workflows in film, events, and advertising.
  • Applied Intuition’s acquisition of Reblika signals how digital humans are becoming critical infrastructure for autonomous system testing.
  • Voxelo.ai is turning everyday product videos into interactive 3D and AR content for e-commerce.

Why this matters: 3D is moving beyond visual output and becoming a creative and technical starting point. It’s shaping the way we make media, train intelligent systems, and model the world.


🧠 Reality Decoded

Your premium deep dive.

What happens when you wear a camera all day, every day? A decade ago, I tried it. And with OpenAI now rumored to be developing a small, always-on device that sees the world around you, that experience feels newly relevant.

OpenAI’s $6.5 billion acquisition of io, the hardware startup co-founded by Jony Ive, is reportedly fueling a new category: a spatially aware third core device that works alongside your laptop and smartphone. Many imagine it as a discreet wearable camera designed to feed real-time context to AI.

I wore the Narrative Clip, a tiny lifelogging camera that automatically took a photo every 30 seconds. Launched in 2014, it was designed to passively capture your day without screens or buttons: just clip it on and let it record your life in snapshots.

Here are a few things I learned from wearing a camera every day:

  • Habit is everything. For a wearable to work, it has to become part of your daily routine. The payoff, like seeing your life through new eyes, needs to outweigh the friction of remembering to wear and charge it.
  • Design is both social and functional. Where and how you wear a device shapes how people react and what it captures. When I wore the Narrative Clip on my chest, even alongside Google Glass, most people ignored it. But Glass on my face drew instant attention. Placement didn’t just affect comfort. It changed what the camera could actually see.
  • The data is the point. The real value wasn’t in the photos themselves but in what they revealed over time: patterns I didn’t know I had, such as how often I was on my phone and who I spent time with (or didn’t). A device that sees your world can surface insights you’d never think to track and help both you and AI better understand your life.
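The pattern-surfacing described above is simple to sketch. Assuming each auto-snapshot carries an ISO 8601 timestamp (a hypothetical log format, not the Narrative Clip’s actual export schema), a frequency count over hours of the day already reveals daily rhythms:

```python
from collections import Counter
from datetime import datetime

def hourly_activity(timestamps):
    """Count snapshots per hour of day to surface daily rhythms."""
    hours = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    # Most-photographed hours first
    return hours.most_common()

# Hypothetical capture log: one ISO timestamp per auto-snapshot
log = [
    "2014-05-01T08:30:00", "2014-05-01T08:30:30",
    "2014-05-01T12:15:00", "2014-05-01T18:45:00",
]
print(hourly_activity(log))  # → [(8, 2), (12, 1), (18, 1)]
```

The same idea extends to who appears in frame or how often a phone screen shows up, given a tagging step upstream.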

If OpenAI is building a context-aware companion, they’ll need to get more than the hardware right. It has to be something people want to wear or use on a daily basis, something that feels natural in everyday life. If they get it right, AI won’t just see the world more clearly, it will understand us better and help enrich the way we live.

Key Takeaway:
If OpenAI’s device truly helps AI see the world the way we do, it could reshape how machines understand us. The real breakthrough won’t be in hardware specs or camera quality, but in making perception ambient, wearable, and worth it. Context could become the new interface and presence, the new input.

📡 Weekly Radar

Your weekly scan across the spatial computing stack.

PHYSICAL AI

🚗 Waymo Report Shows 88% Fewer Serious Crashes Than Human Drivers

  • New data show that Waymo’s autonomous vehicles were involved in 88% fewer crashes resulting in serious injury or worse compared to human drivers in the same locations.
  • Why this matters: As autonomous vehicles are still relatively new to many, publishing data around safety is a savvy step by Waymo to increase transparency and trust with passengers.

🕶️ Meta Reportedly Invests $3.5 Billion in EssilorLuxottica Amid Smartglasses Push

  • Meta has reportedly taken a minority stake of under 3% in EssilorLuxottica, valued at around €3 billion ($3.5 billion), according to a report by Bloomberg.
  • Why this matters: The race for the face is heating up. Smartglasses are emerging as the next mainstream device, with tech giants betting big on a post-smartphone future anchored to our faces.

IMMERSIVE INTERFACES

👁️ CREAL Secures $8.9M From ZEISS to Scale Light Field Displays for AR and Vision Care

  • CREAL's $8.9 million funding round was led by ZEISS and will support the miniaturization of its light field displays and their integration into ZEISS diagnostic tools and AR glasses.
  • Why this matters: Light field displays align digital content with natural human vision, paving the way for more comfortable, lifelike AR and signaling deeper investment from optics leaders like ZEISS.

🔥 Nokia Shares Progress on Conductivity-Based Thermal Haptics for XR Touch

  • Nokia detailed its latest research into heat-based touch systems that let users feel temperature differences, like hot and cold, in virtual environments.
  • Why this matters: Adding temperature to touch brings XR closer to true sensory realism, moving immersion beyond sight and sound into how we physically feel the virtual world.

SIMULATED WORLDS

🧑‍💻 Intangible Launches Open Beta for Spatial AI Creation Platform for Visual Storytelling

  • Intangible’s browser-based 3D tool is now in open beta, giving creative teams a faster way to build visual ideas across film, events, advertising, and games.
  • Why this matters: By using 3D tools to generate flat media, Intangible bridges spatial creation with today’s dominant formats, making immersive tech immediately useful and widely accessible.

👤 Applied Intuition Acquires Reblika to Simulate Human-Vehicle Interaction

  • Applied Intuition has acquired Reblika’s technology to create highly detailed, animated digital humans for simulation.
  • Why this matters: 3D content is evolving into essential infrastructure, powering everything from digital humans to autonomous systems beyond the confines of immersive media.

PERCEPTION SYSTEMS

🧠 Formant Platform Brings Generative AI to Robotics Operations

  • Formant's F3 platform uses natural language and agentic reasoning to control robots, surface insights, and automate decisions.
  • Why this matters: Foundational models are giving robots the ability to reason in real time, marking the start of a new era where machines can understand and interact with the physical world.

SOCIETY & CULTURE

🎭 AR App Translates Indigenous Stories Into Location-Based Experiences

  • A research team at the University of Sydney has created an AR experience that presents Indigenous narratives using sound and images tied to real-world sites.
  • Why this matters: Augmented reality lets us step into the past, turning locations into living portals for history, culture, and human connection.

🌀 Tom's Take

Unfiltered POV from the editor-in-chief.

One day, we’ll look back at how we use the word immersive today for XR, and it’ll sound adorably outdated.

Just like we once called CD-ROMs “interactive” or referred to 28.8k modems as “high-speed,” our current definition of immersion in XR feels advanced for its time, but it’s far from fully realized.

Right now, immersion mostly means sight and sound. High-fidelity displays, spatial audio, and six degrees of freedom let us enter digital content in ways we never could before. But we’re still only scratching the surface. We haven’t yet brought all of our senses into the experience, at least not in a way that mirrors how we engage with the physical world.

Nokia’s latest research into thermal haptics, letting you feel heat and cold in XR, is a glimpse of where things are headed. Seeing something isn’t the same as feeling it. That’s where things get real. When your body can tell the difference between virtual wood and virtual metal based on how each absorbs heat, that’s a whole new layer of fidelity. And it doesn't stop at touch. Scent and taste are also areas of research that feel experimental now, but will eventually reach commercialization.

We’ll truly earn the right to use the word immersive when XR engages all of our senses. For now, the word works well enough. But when we can no longer tell the difference between the virtual world and the real one, that’s when we’ll know we’ve arrived.


🔮 What’s Next

3 signals pointing to what’s coming next.

  1. Robots are the new Raspberry Pi
    Robots are becoming the most intuitive way to learn robotics itself. With Reachy Mini, Hugging Face is putting physical AI into the hands of students, makers, and educators. Open-source robotics invites tinkering, remixing, and learning through play. Tools like this are developing the next generation of technologists who will grow up understanding how machines perceive, interact with, and navigate the world.
  2. Displays are evolving for the human eye
    As smartglasses inch closer to the mainstream, a new wave of display technology is quietly reshaping what comes next. These systems promise more than just screens. They aim to deliver heads-up experiences that mirror how we see the real world. CREAL’s light field technology and XPANCEO’s smart contact lenses are examples of this, using optics that adapt to our eyes rather than the other way around. These displays point to a future where wearables don’t just sit on your face, they become part of how we’re wired to see.
  3. XR and AI are reshaping the operating room
    From Rush University’s AI-assisted Vision Pro colonoscopy trials to Cobionix’s autonomous medical robotics, spatial computing is taking hold in healthcare. These platforms are fundamentally changing how physicians work: XR helps them stay focused and hands-on, AI surfaces critical insights and assists in pattern recognition and diagnosis, and robotics enables remote procedures. The result is care that’s more precise, personalized, and proactive.

🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.

📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!