Remix Reality Insider: Robots at Work Everywhere

Source: Midjourney - generated with AI

Your monthly briefing on the systems, machines, and forces reshaping reality.

🛰️ The Signal

One pattern worth watching.

Robots are moving into new kinds of work. Tasks that were once too variable, manual, or uneconomical to automate are now being handled across environments like security, food processing, and field services.

This month’s news surfaced signals like these:

  • Faraday Future’s FX Aegis quadruped cleared U.S. compliance testing and is moving into formal sales, designed for security, patrol, and companionship use cases that require mobility across complex environments.
  • Chef Robotics expanded its platform into meat tray assembly, applying robotic picking systems to one of the more difficult categories in food processing, where products vary in shape, size, and texture.
  • Lucid Bots is scaling autonomous exterior cleaning through a robotics-as-a-service model, with close to 1,000 robots deployed across commercial jobs ranging from building washing to pressure cleaning.

Why this matters: Robotics is showing up across more industries, not just a few controlled environments. The same advances in perception, manipulation, and autonomy are now being used in very different kinds of work. Adoption is starting to move faster across more categories.


🧠 Reality Decoded

A deeper look at what matters.

The next phase of spatial computing is not being defined by hardware. It is being shaped by creators who are turning these tools into new forms of expression, interaction, and identity.

Nathan Bowser's interview with Ines Alpha drives this point home. Alpha's work in “3D makeup” blends CGI and AR with the physical, creating something that sits between fashion, interface, and self-expression.

Three takeaways from her work:

  • Spatial computing is becoming participatory, not just visual: Alpha moved from CGI to augmented reality because she wanted people to actually try her work on. If it’s makeup, it should be wearable. That shift turns the viewer into the subject. Instead of just watching, people become part of the experience.
  • Digital and physical are starting to co-design each other: Her collaboration with brands like Prada goes beyond filters. She is helping translate digital textures into physical materials and back again, using 3D tools to prototype products that exist across both worlds.
  • Emotion is becoming a design layer: Projects like HyperEmotionalSkin, which respond to user emotion, point to a different kind of interface. Instead of static visuals, these systems react, adapt, and express. The goal is not realism. It is transformation.

Key Takeaway:
Creators are defining what spatial computing is actually used for. Alpha’s work shows how it becomes something people wear, interact with, and use to express themselves.

📡 The Radar

Your monthly scan across the spatial computing stack.

PHYSICAL AI

🏭 Foundry Robotics Raises $19 Million to Build AI-First Robotics for Manufacturing

  • Foundry Robotics has raised $19 million in total seed funding to expand its robotics-driven manufacturing platform.
  • Why this matters: Investors are backing a manufacturing model built around software and robotics rather than traditional factory expansion alone. This could mark a broader shift toward AI-native manufacturing infrastructure in the United States.

🧠 Mind Robotics Raises $500M Series A to Scale AI Industrial Robotics Platform

  • Mind Robotics announced a $500 million Series A round co-led by Accel and Andreessen Horowitz, following a $115 million seed round in 2025.
  • Why this matters: Mind Robotics is focused on factory work that traditional automation cannot handle: tasks that vary and require judgment. By building the full system and developing it within a live production environment, it has a clearer path from testing to real-world use.

🤖 AGIBOT Hits 10,000 Robots as Production Speed Surges

  • AGIBOT announced its 10,000th humanoid unit, marking one of the earliest large-scale production milestones in the sector.
  • Why this matters: A jump from 5,000 to 10,000 units in three months, alongside active deployments, points to production and distribution operating at a different scale than earlier phases.

🇺🇸 FANUC Plans $90M Michigan Facility to Expand U.S. Robot Production Capacity

  • FANUC America announced a $90 million investment to acquire property and build an 840,000 square foot facility in Michigan.
  • Why this matters: This deepens FANUC’s long-term U.S. footprint while scaling the infrastructure needed to support more advanced, software-linked automation systems.

🤝 Fauna Robotics Joins Amazon While Continuing Product Sales And Support

  • Fauna Robotics announced it has joined Amazon and will operate as an Amazon company going forward.
  • Why this matters: Amazon appears to be positioning itself early in the humanoid robotics wave, bringing in an experienced team as the category takes shape.

IMMERSIVE INTERFACES

👓 Snap Bets on Qualcomm to Power the Next Wave of Specs Glasses

  • Snap’s Specs subsidiary signed a multi-year agreement with Qualcomm to use Snapdragon XR chips in upcoming generations of Specs.
  • Why this matters: Snap is making it clear that this is about growing Specs into a real product platform. A long-term Qualcomm partnership gives consumers more reliable performance, gives developers a stable foundation to build on, and moves AR glasses closer to becoming an everyday computing device.

🔌 Mojo Vision Lands $17.5M to Advance Micro-LED Optical I/O for AI Infrastructure

  • Mojo Vision raised $17.5 million from Future Ventures to accelerate its micro-LED platform for AI systems.
  • Why this matters: Micro-LED is showing up here as both a data center interconnect solution and a display system for AI glasses, pointing to how the same core technology can span infrastructure and interface layers.

🎮 Rec Room to Shut Down VR Social Platform After Decade Online

  • The VR-focused platform reported over 150 million players, with users forming more than 500 million friendships and logging a combined 68,000 years of activity.
  • Why this matters: The VR market is clearly in flux, with platforms like Horizon Worlds also undergoing changes that reflect how hard it is to sustain large-scale social ecosystems. Rec Room’s shutdown underscores how even massive user engagement doesn’t guarantee a durable model in this category.

SIMULATED WORLDS

🌍 Niantic Advances Its Geospatial Platform for Real-World Mapping and Positioning

  • Niantic Spatial introduced a set of updates to its platform for capturing, mapping, and positioning in real-world environments.
  • Why this matters: Spatial maps become more useful when they can be continuously captured, updated, and localized across environments. Connecting these layers moves mapping closer to something machines can rely on in real-world conditions.

🧊 Tripo AI Secures $50M to Advance Native 3D Generation Models

  • Tripo AI raised $50 million from Alibaba and Baidu Ventures to support research and expand its developer platform.
  • Why this matters: If 3D assets can be generated quickly and come out clean enough to use right away, teams don’t have to spend extra time fixing or rebuilding them before they go into a project.

PERCEPTION SYSTEMS

🤖 Google DeepMind Pushes Robots Toward Real-World Autonomy With Embodied Reasoning Upgrade

  • The model improves how robots interpret physical environments by strengthening spatial reasoning, multi-view understanding, and task planning.
  • Why this matters: The ability to point, count, and decide if a task is done, especially across messy, real-world inputs like gauges and multiple camera views, is what turns a system from reactive to autonomous.

🚗 Pony.ai Introduces Self-Improving World Model for Autonomous Driving

  • Pony.ai introduced PonyWorld 2.0, an upgraded world model designed to improve how its autonomous driving system is trained and scaled.
  • Why this matters: PonyWorld 2.0 shifts part of the iteration cycle from human-led to system-guided, which directly addresses a core bottleneck in scaling autonomous driving: knowing what to train next, and doing it efficiently at fleet scale.

📷 AGIBOT Pushes World Models Into Interactive Simulation With Genie Envisioner 2.0

  • AGIBOT updated Genie Envisioner 2.0 to turn its world models into interactive simulators where robots can act and learn.
  • Why this matters: This update turns world models into places where robots can practice, not just predict what might happen. Instead of needing more real-world trials, progress depends on how accurate and responsive the simulated environment is.

🌀 Tom's Take

Unfiltered POV from the editor-in-chief.

Smartglasses are starting to run into the real world.

Meta’s Ray-Ban glasses have been gaining traction, showing up everywhere from concerts to college campuses to everyday city streets. People are using them to capture moments hands-free, stream live experiences, and experiment with what it feels like to wear a camera rather than hold one. At the same time, some venues and events are beginning to push back, with restrictions around recording and growing conversations about when and where these devices should be used.

This tension is not new. It is what happens when a new category moves from novelty into daily life. Back in the mid-2000s, early camera phones created similar friction. Gyms, restaurants, and clubs began putting rules in place, not because cameras were new, but because they were suddenly everywhere. The technology outpaced the social norms around it. There was a gap between what was possible and what felt acceptable.

Smartglasses are creating that same gap again. But this time the difference is subtle. With a phone, capturing media is obvious. You see someone take it out and start recording. With glasses, it’s harder to tell. It blends into normal behavior. That shifts the social contract, not by changing what people can do, but by changing how visible it is when they do it.

What’s emerging now is the early formation of new etiquette. Where are these devices appropriate? When is consent expected? How do people signal that a recording is happening? These rules don’t come from product design alone. They get negotiated in public, through usage, friction, and pushback.

This is often what happens right before a category becomes standard. When a device starts to create tension in everyday environments, it usually means it’s no longer confined to early adopters. It is entering shared spaces. People react not because it is unfamiliar, but because it is becoming common enough to matter.

The backlash can look like resistance. In many cases, it’s a sign the product is reaching more people. What follows is a period where norms catch up, and people decide how it actually gets used.


🧵 The Throughline

Three emerging themes shaping what's next.

  1. Smart glasses are being optimized for continuous use
    Snap is working with Qualcomm to enable more efficient on-device processing for AI and AR experiences, while Meta is expanding into prescription-ready frames with improved fit and comfort. Both of these moves focus on removing the practical barriers to wearing smartglasses throughout the day, shifting the category from something used occasionally to something designed to stay on.
  2. Robotaxis are being assembled, not built
    In Los Angeles, MOIA and Uber are working together to bring autonomous vehicles onto the road, combining the vehicle, the driving system, and the ride platform. In Europe, Pony.ai, Uber, and Verne are doing something similar, each handling a different part of how the service actually runs. Instead of one company owning the full stack, robotaxis are emerging as coordinated systems built through partnerships, making the model easier to scale and replicate across cities.
  3. Autonomous systems are starting to guide their own improvement
    Google DeepMind is improving how robots understand what they’re looking at and whether a task is actually finished, while Pony.ai is building systems that can spot their own weak points and ask for better data. Instead of relying only on engineers to guide every step, more of that process is starting to happen inside the system itself.

Know someone who should be following the signal? Send them to remixreality.com to sign up for our free newsletter.

📬 Make sure you never miss an issue! If you’re using Gmail, drag this email into your Primary tab so Remix Reality doesn’t get lost in Promotions. On mobile, tap the three dots and hit “Move to > Primary.” That’s it!

🛠️ This newsletter uses a human-led, AI-assisted workflow, with all final decisions made by editors.