🔓 Remix Reality Insider: XR's Gift of Early Insight

Your premium drop on the systems, machines, and forces reshaping reality.

🛰️ The Signal

This week’s defining shift.

XR is a practical tool for reducing real-world risk.

It helps people see what they are dealing with before they commit to a choice or an action. Teams can spot problems before they happen, drivers can get comfortable with harder scenarios before hitting the road, and shoppers can get a better feel for fit and style before purchase. Better awareness at the start tends to pay off later.

This week’s news surfaced signals like these:

  • Mercedes-AMG PETRONAS F1 is using TeamViewer’s AR tools to speed up how its test rigs are put together. Engineers can point a tablet at the setup and see step-by-step guidance placed directly on the hardware. The overlays come from the team’s CAD files and help staff check part placement and confirm that everything is ready before testing starts.
  • Tom Ford Fashion has added an AR eyewear try-on feature to its online stores. The experience, powered by Perfect Corp., uses a person’s pupillary distance to show frames at the right size on their face. This gives shoppers a more accurate sense of how different styles will look and can help cut down on returns.
  • South Carolina State University opened a VR training lab for commercial drivers, using full-size simulators to prepare people for roadway hazards such as fatigue, congestion, and aggressive driving. The system also captures physiological data to support safety research and improve training design.

Why this matters: Tools that help people understand things earlier can lead to better outcomes. XR does this by making moments that used to feel uncertain easier to anticipate. As more organizations adopt it, the technology becomes a powerful way to bring more confidence into everyday decisions.


🧠 Reality Decoded

Your premium deep dive.

Physical AI investment is growing across government, enterprise, and the startup ecosystem. Public agencies are putting real funding behind autonomous solutions, large firms are building out platforms and labs dedicated to robotics and simulation, and early-stage programs are offering targeted support for new teams focused on this area. The interest is broad and coordinated, which shows that physical AI is being treated as a serious area for long-term investment.

When public agencies, large companies, and startup groups all put their attention on the same area, it changes the pace and shape of progress. The space gains clearer direction, more resources, and a wider mix of ideas. That kind of alignment creates the conditions for meaningful development and makes it easier for new technologies to find their place in the world.

  • Public-sector commitment: The U.S. State Department’s plan to provide up to $150 million for Zipline places autonomous delivery within global health infrastructure. It also shows interest in how American-built robotics can support public health and economic goals.
  • Enterprise adoption: EY’s new physical AI platform, lab, and leadership role reflect how large organizations are preparing for robotics and simulation. Their work around digital twins, synthetic data, and safety outlines how enterprises expect to engage with these systems.
  • Startup support: MassRobotics is now taking applications for its next Physical AI Fellowship, supported by AWS and NVIDIA. The program offers early teams compute resources, tools, and guidance for work in robotics and physical AI.

Key Takeaway:
Physical AI is becoming a long-term bet. The range of support behind it shows that many expect robotics and autonomy to play a practical role in important systems.

📡 Weekly Radar

Your weekly scan across the spatial computing stack.

PHYSICAL AI

🤓 Meta Opens Developer Preview for AI Glasses Toolkit

  • Developers can now prototype mobile app integrations using the camera, microphone, and audio features of Meta’s AI glasses.
  • Why this matters: This is the first meaningful opening for developers to extend what Meta’s AI glasses can do. With access to sensors, mobile apps can now see and hear from the user’s point of view, turning the glasses into a real-time interface for the world around them.

🚕 Waymo Expands Testing Operations Across Four New U.S. Cities

  • Manual driving begins in St. Louis, Pittsburgh, and Baltimore as Waymo prepares for future service.
  • Why this matters: Waymo’s rapid expansion shows what’s possible once a clear operational framework and proven technology are in place. With fully autonomous systems already active in major cities, the company is now scaling with precision across the U.S.

🚗 NVIDIA Releases Open Reasoning Model to Support Safer Autonomous Driving

  • DRIVE Alpamayo-R1 combines chain-of-thought reasoning with path planning to handle complex road scenarios.
  • Why this matters: For researchers, Alpamayo-R1 lowers the barrier to exploring how chain-of-thought reasoning can improve real-world AV behavior and shape autonomous driving logic at scale.

🤖 Tutor Intelligence Raises $34M to Expand Robot Workforce and Intelligence Platform

  • Tutor Intelligence secured $34 million in Series A funding led by Union Square Ventures, bringing its total funding to $42 million.
  • Why this matters: Tutor Intelligence's subscription model makes it easier for companies to start using robots without high upfront costs. It also allows faster deployment and avoids the need for in-house maintenance or technical teams.

IMMERSIVE INTERFACES

👓 Cellid Unveils AR Glasses Reference Designs With Mass-Produced Plastic Waveguide

  • New reference designs pair in-house plastic and glass waveguides with compact AR form factors.
  • Why this matters: Plastic has big advantages over glass. It's lighter, cheaper, and more durable, but scaling it without losing image quality has been a challenge. Cellid's ability to mass-produce its plastic waveguide could be a real step toward making AR glasses more practical and scalable.

🏥 MediView Cleared to Bring Mixed Reality Imaging to European Hospitals

  • MediView has received CE mark certification, clearing its spatial computing platform for clinical use in Europe.
  • Why this matters: MediView’s CE mark puts spatial computing into real clinical workflows in Europe. It shows these tools can meet the same standards as other medical devices.

SIMULATED WORLDS

🌐 RP1 Opens Spatial Web Platform to Developers

  • Developers can now build and host their own 3D environments that link into RP1’s real-time, shared spatial network.
  • Why this matters: Most XR platforms make you build inside someone else's system. RP1 flips that, letting developers run their own servers, link into a shared network, and actually own what they build.

🌊 Fujitsu Deploys Digital Twin to Certify Blue Carbon in Coastal Ecosystems

  • Fujitsu's system combines drones, AI, and simulation to measure blue carbon faster and without expert personnel.
  • Why this matters: Fujitsu is using digital twins to turn a complex ecological measurement into a 30-minute process that needs no expert on site. That makes routine tracking and restoration of coastal ecosystems practical at scale.

PERCEPTION SYSTEMS

🛠 RealSense and AVerMedia Debut All-in-One Vision Kit to Speed Robotics Development

  • The SenseEdge Development Kit combines RealSense depth cameras with NVIDIA Jetson-powered compute hardware in a pre-integrated system.
  • Why this matters: By packaging depth sensing, compute, and software into a single system, this kit removes the typical integration roadblocks. This allows developers to spend less time wiring hardware and more time building real applications.

🎥 ByteDance Releases Depth Anything 3, a Simpler Way to Reconstruct 3D Scenes from Images or Video

  • ByteDance Seed Team released Depth Anything 3, a vision model that reconstructs 3D scenes from single images, multi-view inputs, or video.
  • Why this matters: By releasing open-source tools, the team is turning the model into more than just a paper, letting developers put it into practice for robotics, 3D design, or VR projects with minimal setup.

SOCIETY & CULTURE

🛍️ Tom Ford Fashion Integrates AR Try-On Into E-Commerce Experience

  • Tom Ford Fashion is rolling out Perfect Corp.’s AR-powered eyewear try-on tool across its U.S., Canadian, and European online stores.
  • Why this matters: Tom Ford Fashion joins a growing list of brands turning to AR to bring a tangible dimension to e-commerce. Unlike many AR experiments, this feature’s value can be measured directly through sales lift and reduced return rates.
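
The sizing approach described above can be illustrated with a minimal sketch. This is an assumption about how pupillary-distance-based scaling generally works, not Perfect Corp.'s actual implementation: given a user's known pupillary distance in millimeters and the measured pixel distance between their pupils in a camera frame, you can derive an image scale and size the eyewear overlay to match.

```python
# Illustrative sketch of pupillary-distance-based sizing for an AR try-on.
# Hypothetical function, not Perfect Corp.'s actual method.

def overlay_width_px(user_pd_mm: float,
                     pupil_px_distance: float,
                     frame_width_mm: float) -> float:
    """Scale an eyewear frame's real-world width into image pixels.

    user_pd_mm        -- the user's pupillary distance in millimeters
    pupil_px_distance -- distance between detected pupils in the image, in pixels
    frame_width_mm    -- the physical width of the eyewear frame in millimeters
    """
    px_per_mm = pupil_px_distance / user_pd_mm  # image scale derived from PD
    return frame_width_mm * px_per_mm

# Example: a 63 mm PD measured as 210 px apart means a 140 mm-wide frame
# should render roughly 467 px wide in that image.
print(round(overlay_width_px(63.0, 210.0, 140.0), 1))
```

Because the scale comes from a known physical measurement rather than a guess, frames render at true relative size, which is what makes this kind of try-on useful for judging fit and not just style.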

🌀 Tom's Take

Unfiltered POV from the editor-in-chief.

Earlier reporting by the Financial Times that Apple may be preparing for an eventual leadership transition got me thinking about what the next CEO will actually need. Vision is the big one. We are entering the next wave of computing, one that moves technology from behind the screen into the real world. This is new territory for everyone, and it is not the time for legacy thinking. The next leader has to think outside the box, quite literally. At the same time, they need a sense of how fast to guide the market. Timing matters. Because this shift is as significant as the birth of computing, people will need to be eased into this future, not overwhelmed by it.

There are a few areas that stand out as priorities for the next decade based on public market trends and Apple’s broader direction.

AI is a major one. Apple is well-positioned with its focus on design, privacy, and a powerful ecosystem to take what is happening with LLMs and make it more user-friendly. A better UX, local processing, and safety will matter. Apple also has an opportunity to personalize AI by tapping into the many touchpoints it already has with its users. The next step is enriching it with multimodal context from sensors like cameras and microphones in our homes, on our bodies, and in our pockets. That layer of context is where things get interesting and will be lucrative for Apple's own apps as well as its wider developer ecosystem.

XR is another big area. Vision Pro is a strong start with room to grow. The challenge with any mixed reality headset is finding value that justifies wearing something bulky. There is a clear opportunity in the enterprise right now, especially since the AR enterprise market has thinned out. On the consumer side, the killer apps still need to emerge. Vision Pro is already showing strength in entertainment. One immediate path could be reframing it as a next-generation Apple TV, a simpler entertainment device that helps people understand why they should buy it and then grow the use cases from there.

At the same time, Apple needs to prepare for our post-smartphone future, which is built around smartglasses. This begins with AI glasses. The AI glasses category today feels similar to smartwatches in 2015, when Apple introduced the Apple Watch: the market is primed for Apple's entry. AI glasses would be a strong iPhone companion, especially if they feature an easy handoff from frames to the phone. They would also give Apple Intelligence eyes and ears on the world with their onboard sensors. Once AI glasses prove their value, displays and full AR features can follow as the technology matures and consumers are more ready for them.

Further out, I can see Apple exploring robotics as humanoids in particular move from science fiction into early reality. Apple’s focus on sensors, on-device intelligence, and hardware-software integration gives it a strong foundation to step into this space.

The next decade will push technology further into the world around us. Any company working in this space will need leadership that understands the scale of this shift and can move toward it at the right pace.


🔮 What’s Next

3 signals pointing to what’s coming next.

  1. Close-range vision becomes a priority
    Innoviz is supplying short-range LiDAR for Daimler and Torc’s autonomous trucks, adding a sensor built for the tight maneuvers those vehicles face on the road. RealSense and AVerMedia’s new developer kit brings depth sensing and computing together for robots that work at the human scale. Both updates highlight growing attention on the close-range perception layer that autonomous systems rely on to move safely and handle detailed tasks.
  2. AI-enabled 3D tools plug into existing pipelines
    Tencent’s launch of Hunyuan 3D and ByteDance’s release of Depth Anything 3 highlight a change in how 3D pipelines are built and used. Both tools create files that drop into existing software and production flows, so teams can work with 3D content without rebuilding their process from scratch. This makes it easier to bring new 3D assets into design, simulation, and spatial apps.
  3. 3D product twins become central to modern commerce
    Tom Ford’s try-on feature depends on precise digital versions of its frames. Vyking is creating the systems that produce these models for entire product lines. These detailed digital twins are becoming central to how retail operates. Once a product is captured in 3D, the same file can support try-on, online viewing, marketing, and other channels without being rebuilt each time.

🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.

📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!

🛠️ This newsletter uses a human-led, AI-assisted workflow, with all final decisions made by editors.