Remix Reality Insider: Robots Are Learning to Feel
Your weekly briefing on the systems, machines, and forces reshaping reality.
The Signal
This week's defining shift.
Robotics is moving beyond vision and into touch. Dexterity and physical intuition are becoming a new frontier for scalable Physical AI.
For years, robots have learned to see and plan. Now the bottleneck is manipulation: sensing force, adapting grip, handling soft or fragile objects, and operating in the last millimeter of the real world. This week, multiple signals pointed to a shift toward vision-language-action (VLA) systems that treat tactile intelligence and physical intuition as a core control layer.
Instead of treating physical intelligence as an edge case, companies are now designing models around it. Touch is becoming a first-class input, manipulation is becoming a primary benchmark, and VLAs are evolving from perception engines into full-stack control systems.
This week's news surfaced signals like these:
- Sharpa introduced CraftNet, a VTLA system that combines vision, touch, language, and action to perform fine manipulation on real robots without scripting or simulation.
- Microsoft unveiled Rho-alpha, a VLA model that integrates tactile feedback so robots can follow natural instructions and adapt in real time as they handle physical tasks.
Why this matters: The next phase of Physical AI belongs to machines that can interact with the physical world, not just interpret it. Getting closer to human ability means developing touch, physical intuition, and real-world responsiveness.
Reality Decoded
Your premium deep dive.
This week, we are taking a deeper look at Zipline, the world's largest autonomous drone delivery service. On the heels of its $600M raise and expansion into Houston and Phoenix, Zipline's decade-long evolution offers a window into what it takes to scale autonomous logistics in the real world. From medical supply delivery in Africa to instant home delivery in the U.S., it serves as a case study in turning robotics into durable infrastructure.
Three things you need to know about Zipline:
- Autonomous delivery has already proven it can scale: With reportedly more than 2 million commercial deliveries and 125 million autonomous miles flown, this is one of the few robotics operations running at real national scale.
- The real moat is logistics infrastructure, not hardware: The competitive edge comes from a full-stack system spanning airspace coordination, fulfillment, autonomy software, safety operations, and regulatory approval, not just the aircraft themselves.
- The strategy is shifting from healthcare to everyday consumer delivery: After earning trust through medical logistics, the focus is now expanding into food and retail, with restaurant partnerships positioning drones as a faster and cleaner alternative to car-based delivery.
Key Takeaway:
Zipline shows that robotics scales when autonomy becomes infrastructure, and infrastructure becomes a service people depend on daily.
Weekly Radar
Your weekly scan across the spatial computing stack.
Louisiana and Persona AI Partner on Humanoid Robotics Pilot for Heavy Industry
- Louisiana and Persona AI have signed an agreement to launch a humanoid robotics pilot at SSE Steelās active fabrication facility.
- Why this matters: A formal state partnership to test humanoid robots in a live industrial setting is rare in this space. Louisiana is backing real-world deployment, not just research.
Waymo Launches Driverless Ride-Hailing Service in Miami
- Waymo begins rolling out its fully autonomous rides to public users across a 60-square-mile area in Miami.
- Why this matters: Waymo's steady rollout into Miami shows the company is sticking to its playbook: expand city by city, keep it controlled, and prove autonomy can scale.
Rokid AI Glasses Style Now Shipping Globally Through Web and Amazon
- Rokid's AI Glasses Style are now shipping globally via its official website, with Amazon availability in the U.S. and Germany.
- Why this matters: Another AI glasses option hits the market as the category heats up. With more players entering the space, Rokidās global push signals growing momentum and rising competition in wearable AI.
Avegant Launches Smaller, More Efficient Light Engine for AR Glasses
- New AG-30L3 model cuts size and weight in half while boosting resolution and lowering power use.
- Why this matters: These kinds of size, power, and clarity gains are what make all-day AR glasses feel less like prototypes and more like products.
Job Simulator Reaches 6 Million Installs as Owlchemy Celebrates 10 Years in VR
- Owlchemy Labs announced over 6 million installs of Job Simulator, marking its 10th anniversary.
- Why this matters: Few VR titles show this kind of staying power. A decade of steady installs suggests that approachable, hands-on design can carry a title across multiple hardware generations.
Lynx R2 Launches as Open, Enterprise-Ready Mixed Reality Platform
- Lynx has debuted the R2 headset, aimed at professional use in healthcare, industry, and research, with availability details still to come.
- Why this matters: In fields like robotics, AI research, and surgical training, raw sensor access is a requirement. It enables users to build things like precision systems and validate complex models. Lynx offering full, synchronous access out of the box is a serious differentiator.
Nucleus4D Raises $1.5M to Scale Spatial Capture Platform for Real-World Digitization
- Antler, South Loop Ventures, and sector-focused angels backed the pre-seed round.
- Why this matters: Nucleus4D isn't just offering tours. It positions itself as spatial infrastructure, turning one capture into reusable data for both humans and machines.
Sharpa's New VTLA Model Targets the Hardest Problem in Robotics
- Sharpa introduced CraftNet, a VTLA model that combines vision, tactile sensing, language, and action for fine robotic manipulation.
- Why this matters: Dexterity is still the hardest problem in robotics, and CraftNet is Sharpa's attempt to solve it with a system built for real-world control.
SPIE and UNC Charlotte Launch $1M Doctoral Scholarship Fund in Optical Science
- The SPIE Emerging Innovators Scholarship will support two PhD students in UNC Charlotte's Optical Science and Engineering program.
- Why this matters: Endowments like this seed the next generation of specialists in optics and photonics, two of the most critical domains driving spatial computing forward.
NBA Launchpad Reveals 2026 Tech Cohort Focused on Cognitive, Spatial, and Fan Engagement Tools
- Five startups were selected from over 200 global applicants for the NBA's fifth Launchpad program.
- Why this matters: The NBA is betting that the next wave of value isn't just digital; it's spatial. From 3D game reconstructions to AI-powered court training, the league is turning the physical world into a data platform.
Tom's Take
Unfiltered POV from the editor-in-chief.
This week I was at FOG Fair in San Francisco, the city's largest design and art fair, where I attended a talk on the show floor. As at most expos, the conference area was set up with only half walls, which meant the roar of the fair was impossible to ignore while trying to focus on what was being said on stage.
It got me thinking about the role wearables play in augmenting our environment, and how useful it would have been to be wearing a pair of glasses or earbuds that could isolate the voices on stage while muting everything else around me.
Wearables, especially smartglasses, are often discussed in terms of what they can add to a space, such as augmented reality visuals layered onto the world. But they also have a subtractive side, often called diminished reality, where editing our environment means removing distractions rather than adding more information.
We are already starting to see this in audio. Devices like Apple's AirPods and the Ray-Ban Meta smartglasses are leaning into enhanced hearing, including features like Conversation Focus that help isolate voices in noisy environments. In these moments, wearables become assistive tools that make the environment more accessible to the wearer. They optimize reality so people can focus, hear better, and perform at their best.
This is a side of wearables that communicates value immediately. Not in a futuristic way, but in a practical one. Sometimes the most powerful upgrade is not adding more to reality, but removing what gets in the way.
What's Next
3 signals pointing to what's coming next.
- Robotics companies are starting to share autonomy platforms across environments
Serve Robotics' acquisition of Diligent Robotics shows its push to run robots indoors, such as in hospitals, as well as outdoors on the sidewalks where it operates today. Instead of building separate systems for each environment, Serve Robotics intends to reuse data, code, and operations to scale faster, moving toward platform consolidation over standalone robots.
- 3D world generation is shifting into a developer tool, not a specialty skill
World Labs' new API lets developers generate explorable 3D environments from text, images, or video inside existing workflows. Instead of building worlds manually, teams can now treat spatial content as software that integrates into products and pipelines.
- Robots are shifting into mobile sensing systems, not just moving machines
Boston Dynamics' latest Spot updates add visual, thermal, and acoustic sensing for inspection work. The focus is moving from how robots move to how well they collect and combine data. As inspections scale, multimodal perception and repeatable data capture matter more than locomotion alone.
Know someone who should be following the signal? Send them to remixreality.com to sign up for our free weekly newsletter.
Make sure you never miss an issue! If you're using Gmail, drag this email into your Primary tab so Remix Reality doesn't get lost in Promotions. On mobile, tap the three dots and hit "Move to > Primary." That's it!
This newsletter uses a human-led, AI-assisted workflow, with all final decisions made by editors.