🔓 Remix Reality Insider: AI Is Running Out of Data
Your premium drop on the systems, machines, and forces reshaping reality.
🛰️ The Signal
This week's defining shift.
AI is learning the world, not memorizing it.
A new generation of AI systems is moving beyond brute-force training and toward a structured understanding of space, time, and physical behavior. Instead of consuming endless data, these models are learning how the world works and how it changes.
Memorization supports passive intelligence. World modeling enables intelligence that can anticipate, adapt, and act. The difference is whether a system relies on static snapshots of the past or can observe, reason, and respond as conditions evolve.
This week's news surfaced signals like these:
- Molmo 2 is a new open multimodal model from Ai2 that adds spatial and temporal reasoning across images and video. It tracks objects and events over time, treating scenes as ongoing situations rather than static frames.
- HYPRLABS is building an autonomy system that learns directly from real-world driving rather than relying on maps, labels, or heavy simulation. By learning through live experience and correction, it treats the environment as something to be understood through interaction.
- Google DeepMind's Veo world model uses video-based simulation to predict how robots will behave across tasks and environments before deployment. By generating and testing future scenarios, it evaluates actions and failure modes rather than past performance alone.
Why this matters: When AI starts to understand the world as it changes, it stops being limited to analysis and recommendation. It can begin to operate in real environments, where timing, context, and behavior matter. That's the line physical AI has to cross.
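To make the distinction concrete, here is a minimal, hypothetical sketch of the world-model pattern described above. Nothing in it comes from any of these systems; the `transition` function stands in for whatever dynamics model a team actually trains. The point is structural: actions are scored by simulating futures, not by looking up past outcomes.

```python
import random

# Hypothetical stand-in for a learned dynamics model: given the current
# state and a candidate action, predict the next state. In a real system
# this would be a trained neural network, not a hand-written rule.
def transition(state: dict, action: str) -> dict:
    delta = {"brake": -1.0, "coast": 0.0, "accelerate": 1.0}[action]
    noise = random.gauss(0, 0.05)  # learned models are imperfect too
    return {
        "position": state["position"] + state["velocity"],
        "velocity": max(0.0, state["velocity"] + delta + noise),
    }

# Score a predicted end state: finish near the goal, moving slowly.
def score(state: dict, goal: float) -> float:
    return -abs(state["position"] - goal) - 0.1 * state["velocity"]

def plan(state: dict, goal: float, horizon: int = 5) -> str:
    """Pick the first action of the best simulated rollout.

    This is the core world-model move: roll candidate futures forward
    through the model and evaluate them before acting, instead of
    ranking actions by how they performed in logged data.
    """
    best_action, best_value = "coast", float("-inf")
    for first in ("brake", "coast", "accelerate"):
        s = transition(state, first)
        for _ in range(horizon - 1):
            s = transition(s, "coast")  # deliberately simple rollout policy
        value = score(s, goal)
        if value > best_value:
            best_action, best_value = first, value
    return best_action

print(plan({"position": 0.0, "velocity": 1.0}, goal=4.0))
```

A memorization-based system would replace `plan` with a lookup over logged trajectories; the world-model version can evaluate actions it has never observed.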
🧠 Reality Decoded
Your premium deep dive.
This week, we welcome a new contributor to Remix Reality, Nathan Bowser, who is launching a series spotlighting the creators shaping the next wave of computing. The first piece focuses on spatial creator and futurist Piper ZY, exploring what actually moves spatial computing forward, and what continues to hold it back.
Three ideas from the conversation stand out.
- Spatial computing works when it starts with the body: Piper ZY is known for her AR-powered rings, nails, and fashion pieces, which anchor digital elements to her physical presence and personal style. The key is that the AR feels connected to identity, not devices, which gives the technology intention and meaning rather than leaving it a novelty.
- Creators translate emerging tech into brand value: Brands tap creators like Piper ZY to help harness new technology like AR. Many are eager to use AR but need help applying it to their IP in a meaningful way. Piper's role has been to push the tools, see where they break, and then translate that into something a brand and its audience can understand.
- Skepticism plays a role in progress: Even though she is working at the edge of innovation, Piper ZY remains cautious about new tech like generative AI. Her concerns aren't about whether it works, but about who controls it and who benefits. Questions around ethics and integrity are showing up more often among creators, and they shape which tools get adopted and which ones don't.
Key Takeaway:
Piper ZY approaches spatial computing as something that needs to be shaped, not shipped. Her work connects digital systems to identity, whether that of an individual or a brand, which gives them meaning, all while she stays skeptical about which tools are worth using. That kind of mindset helps define what this next wave looks like.
📡 Weekly Radar
Your weekly scan across the spatial computing stack.
👨‍🍳 Chef Robotics Unveils Chef+ Built on 80 Million Production Servings
- Chef+ doubles ingredient capacity, reduces footprint, and enhances food safety for industrial kitchens.
- Why this matters: Built on feedback from real production use, Chef+ reflects a clear effort to address practical constraints in the field. The updated features aim to make the robot not just more capable but easier for customers to succeed with at scale.
🛫 Wisk's Generation 6 eVTOL Completes First Autonomous Flight
- Wisk Aero's Generation 6 eVTOL completed its inaugural hover flight at the company's Hollister, CA test site.
- Why this matters: Wisk is flying toward a first in the U.S., with an autonomous air taxi now in the FAA certification process. Autonomy in aviation is starting to meet the realities of regulation.
🚚 Serve Robotics Reaches 2,000-Robot Milestone, Scaling Nation's Largest Sidewalk Fleet
- Serve Robotics has deployed over 2,000 autonomous delivery robots, expanding its fleet twentyfold in 2025.
- Why this matters: Hitting 2,000 active units sets a new bar for what's operationally possible with sidewalk delivery robots, which are becoming commonplace as a delivery method in the U.S.
🤖 3,000 Reachy Minis Begin Global Rollout as Hugging Face Ships Early Kits
- Hugging Face CEO Clem Delangue announced that 3,000 Reachy Mini units are now shipping globally.
- Why this matters: The Reachy Mini rollout is a turning point for open, builder-driven robotics. A global wave of developers now has a physical platform to push AI experimentation into the real world.
🕶️ RayNeo X3 Pro AR Glasses Launch Globally
- RayNeo launched its flagship X3 Pro AR glasses globally on December 17, following their initial debut in October of this year.
- Why this matters: True AR glasses with full-color, binocular displays are still rare in the consumer market. The category is slowly emerging, with products like Snap's next-gen Spectacles expected next year and the RayNeo X3 Pro available today.
👓 Plastic Waveguide Innovation Drives Development of Everyday AR Glasses in Japan
- Cellid's mass-produced plastic waveguides will power new AR glasses developed in collaboration with jig.jp and eyewear maker Boston Club.
- Why this matters: Combining Boston Club frames with Cellid's mass-produced plastic waveguides could give these glasses a hardware edge in a growing category of connected eyewear.
🚧 Helm.ai Cuts Data Requirement for Urban Autonomy to Just 1,000 Hours
- Helm.ai's new AI system successfully navigated urban roads it had never seen before using minimal real-world training.
- Why this matters: Helm.ai's semantic simulation suggests that training in abstracted geometry can beat brute-force data collection.
🎬 Moonlake Unveils Reverie, a Real-Time Generative Model Purpose-Built for Games
- Reverie is a diffusion model that runs inside games, generating content in real time without disrupting play.
- Why this matters: Reverie makes generative AI part of the game engine itself: live, reactive, and fully programmable through gameplay.
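Moonlake hasn't published Reverie's internals, so treat this as a generic sketch of what "operating at frame time" implies architecturally: generation runs off the critical path, and the render loop consumes finished output without ever blocking on the model.

```python
import queue
import threading
import time

# Toy stand-in for a generative model step. In a real engine this would
# be a diffusion step on the GPU; here it just takes longer than a frame.
def generate_asset(prompt: str) -> str:
    time.sleep(0.05)  # roughly three frames of work at 60 fps
    return f"texture<{prompt}>"

requests: "queue.Queue[str]" = queue.Queue()
results: "queue.Queue[str]" = queue.Queue()

def worker() -> None:
    while True:
        results.put(generate_asset(requests.get()))

threading.Thread(target=worker, daemon=True).start()
requests.put("mossy wall")

latest = "placeholder"
for frame in range(8):  # a stand-in for the 60 fps render loop
    try:
        latest = results.get_nowait()  # non-blocking: take it if ready
    except queue.Empty:
        pass                           # otherwise keep the last good asset
    print(f"frame {frame}: rendering with {latest}")
    time.sleep(0.016)                  # one 16 ms frame
```

The design point is the `get_nowait` call: gameplay never waits on generation, it just upgrades the world whenever new content lands.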
⏰ Molmo 2 Unlocks Spatial and Temporal Understanding for Video and Images
- Molmo 2 introduces spatial and temporal reasoning across video, image, and multi-image inputs.
- Why this matters: Spatial and temporal understanding is critical for AI systems that operate in the real world. It enables tracking and reasoning over time, which is essential for safe, reliable performance in robotics, automation, and scientific work.
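As a generic illustration (not Molmo 2's actual method) of why temporal understanding is more than per-frame perception, here is the simplest possible tracker. Per-frame detections only become "the same object over time" once something associates them across frames:

```python
# Per-frame detections are just (x, y) centroids; identity over time has
# to be inferred. Greedy nearest-neighbor association is the most basic
# form of the temporal reasoning a video model needs to learn.
def associate(tracks: dict, detections: list, max_dist: float = 2.0) -> dict:
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best_id, best_d = None, max_dist
        for tid, pos in tracks.items():
            d = ((pos[0] - det[0]) ** 2 + (pos[1] - det[1]) ** 2) ** 0.5
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:          # no nearby track: a new object appears
            best_id = next_id
            next_id += 1
        tracks[best_id] = det        # update the matched track's position
    return tracks

tracks: dict = {}
frames = [
    [(0.0, 0.0), (5.0, 5.0)],  # frame 0: two objects
    [(0.5, 0.2), (5.3, 4.9)],  # frame 1: both moved slightly
    [(1.1, 0.4)],              # frame 2: one object left the scene
]
for dets in frames:
    tracks = associate(tracks, dets)
    print(tracks)
```

Real trackers handle occlusion, conflicting matches, and appearance features; models like Molmo 2 have to learn an equivalent of this association implicitly.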
🗺️ Niantic Spatial and Vantor Unite to Build GPS-Free Positioning Network
- Niantic Spatial and Vantor are partnering to build a joint positioning system for air and ground platforms in GPS-denied environments.
- Why this matters: As reliance on GPS becomes a liability, this partnership points to a future where visual data and spatial models form the backbone of autonomous operations.
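Neither company has detailed its stack, but the general shape of visual positioning is easy to sketch: compare what the camera currently sees against a prebuilt spatial map and take the pose of the best match. The map entries, descriptors, and coordinates below are all invented for illustration.

```python
import math

# A hypothetical prebuilt map: each entry pairs a compact visual
# descriptor (here, a toy 3-vector) with the pose where it was captured.
SPATIAL_MAP = [
    ((0.9, 0.1, 0.0), {"lat": 37.7749, "lon": -122.4194}),
    ((0.1, 0.8, 0.1), {"lat": 37.7751, "lon": -122.4180}),
    ((0.0, 0.2, 0.9), {"lat": 37.7760, "lon": -122.4175}),
]

def localize(query_descriptor: tuple) -> dict:
    """Return the stored pose of the visually closest map entry.

    Production systems match thousands of local features and refine the
    pose geometrically; this nearest-descriptor lookup only shows where
    position comes from when GPS is out of the loop: vision plus a map.
    """
    _, pose = min(
        SPATIAL_MAP,
        key=lambda entry: math.dist(entry[0], query_descriptor),
    )
    return pose

print(localize((0.85, 0.15, 0.05)))  # -> pose of the first map entry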
🗣️ Tom's Take
Unfiltered POV from the editor-in-chief.
I see 2026 shaping up as the year AI begins to hit the limits of the screen. Today's systems know the world only through what we have documented, labeled, and stored. They live behind glass. Sure, they are helpful, fast, and impressive, but they are fundamentally removed from the real world, constantly working on information from the past, not the present.
We haven't felt this constraint yet because the past is a powerful dataset. We have decades of text, images, video, and records stored on the internet that allow AI to predict, summarize, and recommend with surprising accuracy. But this advantage has a ceiling. As we ask AI to operate beyond stored knowledge, its blind spots become obvious. It depends on humans to observe the world, interpret it, and feed it back into machines. That dependency limits how far it can go, especially since our documentation is never complete.
Physical AI changes that equation. When AI gains access to sensors like cameras, microphones, depth, and motion, it starts learning from the world as it unfolds. Multimodal systems give AI access to the world at the same time we have it, not after the fact through documentation. That makes it not only more useful but also more grounded. It starts to understand space, cause and effect, and change by encountering them directly.
This is also how AI becomes embodied. Sensors connect perception to action, and action opens the door to autonomy and robotics. Machines then move from answering questions that help us do things to doing the physical work themselves. This shift isn't an upgraded version of the computing we know today. It's a change in how computing shows up at all. Once AI leaves the screen and starts operating in the physical world, that's when the nature of computing really changes. 2026 is when this starts to come together.
🔮 What's Next
3 signals pointing to what's coming next.
- Hardware pioneers face a capital and control reset
Luminar, one of the early companies pushing LiDAR into autonomous systems, filed for Chapter 11 and is selling parts of the business, including its semiconductor unit, to keep operating. iRobot, a pioneer of consumer robotics, also entered Chapter 11 and is transferring ownership to its longtime manufacturing partner. These hardware pioneers, built in an era of public-market funding, are now reorganizing around new sources of capital and different owners.
- Generative AI moves into real-time worlds
Generative AI is moving into live systems. Moonlake unveiled Reverie, a diffusion model that generates visuals while gameplay is happening, operating at frame time without stopping play. SpAItial's Echo is a world model that generates a single 3D environment, which can be explored and edited as it runs. These systems use generation inside the experience rather than around it.
- Hands as a primary interface across physical and virtual systems
Hands are essential to how people and machines interact with spatial systems. Sharpa has started mass production of a high-precision robotic hand designed to handle a wide range of tasks through tactile sensing and fine motor control. Meta has updated hand tracking on Quest, improving speed, recovery, and realism. This makes it practical for developers to rely on hands instead of controllers for fast-paced experiences like fitness and games. Across robotics and XR, hands are moving from an add-on to the primary interface.
🔓 You've unlocked this drop as a Remix Reality Insider. Thanks for helping us decode what's next and why it matters.
📬 Make sure you never miss an issue! If you're using Gmail, drag this email into your Primary tab so Remix Reality doesn't get lost in Promotions. On mobile, tap the three dots and hit "Move to > Primary." That's it!
🛠️ This newsletter uses a human-led, AI-assisted workflow, with all final decisions made by editors.