🔓 Remix Reality Insider: Meta’s Post-Smartphone Future

Source: Midjourney - generated by AI

Your premium drop on the systems, machines, and forces reshaping reality.

🛰️ The Signal

This week’s defining shift.

Machines are learning from the real world and from the worlds we simulate.

To act in the physical world, machines need more than one kind of data. Companies are fusing real-world capture with physics simulations and digital twins to give AI the breadth it needs to generalize.

This week’s spatial computing news surfaced signals like these:

  • Figure + Brookfield are capturing video across homes, offices, and logistics hubs, giving humanoid robots the lived human-environment data they need to navigate daily spaces.
  • Luminary Cloud is generating synthetic datasets with physics-based simulations, letting AI test how cars, aircraft, and other products perform under conditions that are too costly or risky to recreate in real life.
  • PassiveLogic is combining digital twins with generative AI and real-time physics to turn entire buildings into adaptive training grounds for autonomous control systems.

Why this matters: For embodied AI, the distinction between real and synthetic data is starting to matter less than the combination of both. By blending lived environments with simulated ones, AI can train faster, adapt better, and operate more reliably in the unpredictable conditions of the real world.


🧠 Reality Decoded

Your premium deep dive.

Meta used Connect 2025 to double down on its vision of a post-smartphone future. The flagship event focused on three things: glasses, AI, and VR. Here are some key highlights.

1. Glasses are Meta’s next platform
Meta sees glasses as the hardware to carry AI into daily life. This was clear from the only new hardware announced this year: smartglasses. Meta debuted Ray-Ban Display, its first pair with a built-in screen, bundled with a neural wristband for EMG control. Ray-Ban Meta (Gen 2) glasses added sharper video and smarter audio, while Oakley Meta Vanguard targeted athletes with fitness integrations for Strava and Garmin. The expansion of this category frames glasses as a major pillar in Meta’s strategy for the post-smartphone era.

2. AI is accelerating VR creation
Meta’s vision is to make building in VR as simple as giving a prompt, while still keeping room for complex, high-quality worlds. Horizon Studio now supports tools that can generate meshes, textures, and scripts from text, with agentic AI linking them together coming soon. Horizon Engine, a new in-house game engine, raises the ceiling for worlds with faster performance and environments that can handle even more people. Meta also announced its MCP coding assistant, which connects Horizon OS tools to large language models to help developers code and prototype more quickly. These updates aim to lower the barrier to entry for new creators while pushing the limits of what VR content can be.

3. Media is moving to the center
Meta is making media a centerpiece of Horizon OS. Its newly announced Horizon TV is a hub for streaming services, including Prime Video, Peacock, and Twitch, and will soon add Disney+, Hulu, ESPN, and a new app from Blumhouse. 2D developers will also be able to bring higher-fidelity video, 3D content, and panoramic experiences into Quest apps. Meta is signaling that compelling media is the key to turning its ecosystem into something people use every day.

Key Takeaway:
Meta is staking its claim on the next era of computing. By centering on glasses, infusing AI into creation, and putting media at the core, it’s working to turn its early lead in spatial computing into long-term dominance.

📡 Weekly Radar

Your weekly scan across the spatial computing stack.

PHYSICAL AI

🤖 OpenMind Unveils OM1 Beta, Calls It Robotics’ “Android Moment”

  • OpenMind has released OM1 Beta, a universal, open-source operating system that enables robots to perceive, reason, and act autonomously.
  • Why this matters: Open-source is what makes developer empowerment possible. OM1 gives builders a shared toolkit for turning ideas into intelligent machines, letting them create without waiting for a platform owner’s permission.

🍔 Circus Rolls Out Autonomous Food Robot at Meta’s Munich Office

  • Circus SE has delivered its CA-1 robot to Meta, marking the first real-world use of its autonomous meal system.
  • Why this matters: This marks a major step forward for autonomous food prep, and a defining moment for Circus as it moves from concept to commercial use.

IMMERSIVE INTERFACES

🚗 Qualcomm and HARMAN Team Up on AI-Enabled Cockpit Solutions

  • Qualcomm’s Snapdragon Cockpit Elite will power HARMAN’s Ready lineup for AI-driven in-car experiences.
  • Why this matters: Qualcomm has been pushing Snapdragon deeper into cockpits and ADAS for years. This deal with HARMAN reinforces that play with an AI upgrade.

😎 Spectacles Gets WebXR-enabled Browser, New Spotlight and Gallery Lenses

  • Snap OS, the operating system for Spectacles, has been updated to Snap OS 2.0, bringing a WebXR-enabled browser and new Lenses for discovering content.
  • Why this matters: These OS updates make it easier to find, access, and enjoy content, which means users are more likely to stick with the experience. With consumer glasses expected next year, getting this foundation right is critical to building lasting engagement.

👓 visionOS 26 Now Available With Full Rollout of New Spatial Features

  • Apple has officially released visionOS 26 for Vision Pro, delivering new ways to interact with apps, media, and people in spatial environments.
  • Why this matters: This update brings significant spatial features that deepen the Vision Pro experience, especially in web interaction and real-time shared environments.

SIMULATED WORLDS

💰 Over 300 VR Apps on Meta Horizon Have Topped $1M, With 10 Surpassing $50M

  • Meta revealed strong developer momentum at Connect 2025, and new tools for monetization, mixed reality, and spatial development for Horizon OS.
  • Why this matters: Meta is supporting the developer ecosystem from all angles, inspiring success, enabling monetization, and accelerating the creation of more powerful VR experiences.

PERCEPTION SYSTEMS

👀 ABB Invests in LandingAI to Accelerate Vision AI for Autonomous Robotics

  • ABB will integrate LandingAI’s vision AI tools to enhance robot training and deployment.
  • Why this matters: Vision AI makes robots useful for real jobs. LandingAI’s tools cut the time it takes to set that up, so more companies can actually use it.

📷 RoboSense Scales Global Supply of Digital LiDAR for Intelligent Vehicles

  • RoboSense has entered mass production of EM, its full digital LiDAR platform for driver assistance and autonomous systems.
  • Why this matters: Customizable resolution means automakers can spec exactly what each model needs, from ADAS to autonomy. This ensures the sensing performance is aligned with the vehicle design instead of a one-size-fits-all approach.

SOCIETY & CULTURE

🎮 Virtual Boy Returns with 3D Games and Accessories for Nintendo Switch

  • Nintendo will launch Virtual Boy titles on Switch Online + Expansion Pack in the U.S. and Canada on February 17, 2026.
  • Why this matters: The Virtual Boy was ahead of its time when it launched in 1995 with stereoscopic 3D, but its red-only display and awkward design led to poor sales and a short lifespan. Nintendo’s decision to revive it as an optional, low-cost Switch accessory is a clever nod to nostalgia. It won’t move the needle on VR, but it’s a fun way to reuse hardware people already own.

🌀 Tom's Take

Unfiltered POV from the editor-in-chief.

As someone who wore Google Glass daily and later Focals by North, I’ve had extensive hands-on experience with display-equipped smartglasses. The dream of a hands-free, heads-up digital experience is one I still believe in. But in my experience, two obstacles still stand in the way of mainstream adoption.

The first is comfort. Eye strain and fatigue from smartglasses are a real thing, especially in the first few weeks of wearing them. The problem is worst with monocular displays, which make the brain juggle input from one eye against the usual balance of both, a kind of constant mental gymnastics. I remember feeling like I was going cross-eyed trying to play games on Google Glass and getting a headache shortly after. Add factors like screen brightness, focal distance, and display alignment, and the effect can be disorienting. It gets better over time, almost like building a new muscle, but the feeling can be off-putting for those who experience it right out of the box and aren’t committed to an adjustment period.

The second is context. On paper, being able to read messages or emails in your line of sight sounds great. In practice, it quickly becomes overwhelming. When every notification pops into your view, it just becomes eye noise. Notifications are already a constant distraction on our phones and watches today; bringing them up to your eyes makes it even harder to stay focused on the world around you. Ironically, this runs completely counter to the goal of glasses, which is to keep you more present in the moment. I remember having to turn my glasses off many times just to hold an everyday conversation because text messages were literally obscuring the person in front of me. This is where AI has to step in to add context. Displays on glasses need to be context-aware, surfacing information only when it’s truly helpful. Done right, they can be a companion. Done poorly, they become an intruder.


🔮 What’s Next

3 signals pointing to what’s coming next.

  1. Waymo Expands Robotaxi Service
    Waymo is quickly building a national network for driverless rides. The company is launching in Nashville through a partnership with Lyft, marking its first deployment in Tennessee and sixth U.S. city overall. San Francisco also approved a phased pilot at SFO, giving Waymo the green light to connect its service directly to airport travelers. These moves highlight how Waymo is scaling to become an integral part of our everyday travel experience.
  2. Investors Back Foundation Models for Robots
    Capital is flowing into the core AI systems that could make robots general-purpose. Figure closed a Series C round exceeding $1 billion at a $39 billion valuation to scale its Helix model and speed up humanoid deployments. DYNA Robotics raised $120 million in Series A funding, led by NVIDIA and other investors, to scale its infrastructure and accelerate iteration on its DYNA-1 system. These raises show how investors see foundation models as critical to unlocking robots that can adapt across environments and tasks.
  3. Location-Based VR Scales Up
    Immersive venues are expanding with fresh capital and new content. Sandbox VR is entering Italy with a €60 million plan to open up to 40 locations nationwide, starting with Treviso this December. Zero Latency VR is launching HAUNTED, a full-scale horror experience that adds seasonal variety to its 150 global sites. Location-based VR is extending its geographic reach to attract new customers and drive repeat visits with fresh experiences.

🔓 You’ve unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what’s next and why it matters.

📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!

Disclosure: Tom Emrich has previously worked with or holds interests in companies mentioned. His commentary is based solely on public information and reflects his personal views.