The Five Layer Framework of Spatial Computing
Spatial computing is more than a trend; it transforms how machines and humans interact with the world around us. We're witnessing a powerful convergence: intelligent machines, immersive interfaces, and the digital infrastructure required to fuse them into our reality.
To understand this moment, I’ve developed a framework that breaks spatial computing into five core layers. Each layer represents a building block in the remix of reality, where the physical and digital collide to create smarter, more interactive systems.
🧠 What Is Spatial Computing?
At its core, spatial computing is the ability for machines to understand, interpret, and navigate physical space, and for humans to interact with that space through digital means. It’s about presence, context, and perception. The resulting stack powers real-world applications in AI, robotics, AR/VR, digital twins, and more.
This framework outlines five key pillars of spatial computing:
- 🦾 Physical AI – Embodied agents that sense, decide, and act
- 🕶️ Immersive Interfaces – Devices that shape how we perceive and interact
- 🌍 Simulated Worlds – Digital spaces for training, testing, and modeling
- 👁️ Perception Systems – Systems that give machines the ability to sense and understand space
- 🛠️ Infrastructure – The backbone that powers it all

🦾 1. Physical AI
Embodied agents that sense, decide, and act
Physical AI refers to robots and autonomous systems that operate in the physical world. These are machines you can see and touch, and increasingly, machines that can see and respond to you.
- Autonomous Mobility: Autonomous vehicles, drones, delivery bots
- Service Robotics: Assistive robots, companion bots
- Wearable Intelligence: Smart exosuits, spatially-aware wearables
These agents are no longer limited to pre-scripted paths; they use real-time spatial awareness to adapt and respond.
Some examples of players in this category are:
- Autonomous Mobility: Waymo, Zoox, Coco Robotics
- Service Robotics: Figure, 1X, Tesla (humanoid robots)
- Wearable Intelligence: Meta Ray-Ban, Rewind Pendant, ReWalk
🕶️ 2. Immersive Interfaces
Devices that shape how we perceive and interact with the world
Spatial computing also transforms how we interact with machines and environments, shifting from screens to presence.
- Visual: AR glasses, MR headsets, holographic displays
- Auditory: Spatial audio systems
- Tactile: Haptic feedback devices
Immersive interfaces are the new front ends, blending physical perception with digital information seamlessly and intuitively.
Some examples of players in this category are:
- Visual: Meta Quest 3, Apple Vision Pro, Snap Spectacles, Looking Glass
- Auditory: Dolby Atmos, Sennheiser/Dear Reality
- Tactile: Haptx, bHaptics, Teslasuit
🌍 3. Simulated Worlds
Digital spaces for training, testing, and modeling environments
Simulation powers both design and intelligence. It’s where we test reality before we deploy it.
- Environment Modeling: Digital twins, 3D reconstructions
- Simulation & Training: Physics-based simulations, synthetic environments
- Simulation Infrastructure: Game engines, data generation platforms
These digital layers are critical for training AI, designing spaces, and predicting outcomes in the real world.
Some examples of players in this category are:
- Environment Modeling: NVIDIA Omniverse, Esri ArcGIS, Prevu3D
- Simulation & Training: Duality Robotics, Varjo, CAE Inc.
- Simulation Infrastructure: Unity, Unreal, Godot
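To make the idea of physics-based simulation concrete, here is a minimal, illustrative sketch: a projectile stepped forward under gravity with semi-implicit Euler integration. This is not how any of the platforms above work internally (they add collision handling, rich dynamics, and rendering); all values and function names are my own for illustration.

```python
# Minimal physics-based simulation sketch: a 2D projectile under
# gravity, stepped with semi-implicit Euler integration.
# Illustrative only; real simulation platforms use far richer
# dynamics, collision handling, and rendering.

GRAVITY = -9.81  # m/s^2, acting on the vertical axis

def simulate_projectile(vx, vy, dt=0.01):
    """Step a projectile until it returns to ground level (y <= 0).

    Returns the horizontal distance travelled (the range).
    """
    x, y = 0.0, 0.0
    while True:
        x += vx * dt
        vy += GRAVITY * dt  # update velocity first (semi-implicit Euler)
        y += vy * dt
        if y <= 0.0:
            return x

# A synthetic "experiment" like this can generate training data for an
# AI system long before it ever touches the physical world.
distance = simulate_projectile(vx=10.0, vy=10.0)
```

Running many such cheap, parameterized experiments is what makes simulation so valuable for training: reality is expensive to sample, simulation is not.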
👁️ 4. Perception Systems
Systems that give machines the ability to sense and understand space
The foundation of spatial computing is perception: the hardware and software that allow machines to observe and interpret physical environments.
Sensors:
- Visual: RGB, depth cameras, LiDAR
- Spatial: IMUs, UWB, GPS
- Audio & Tactile: Microphones, force, touch, pressure sensors
AI Systems:
- Spatial Understanding: SLAM, localization
- Scene Understanding: Semantic segmentation, computer vision
- Human Interaction: Gesture tracking
- Integration: Sensor fusion and contextual awareness
Perception is how machines build a map of the world and learn to operate within it.
Some examples of players in this category are:
- Sensors & AI Systems: Apple, Meta, Google, Microsoft, NVIDIA
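The sensor fusion step above can be sketched with a classic complementary filter: blending a gyroscope (smooth but drifting) with an accelerometer (drift-free but noisy) to estimate a tilt angle. All readings below are synthetic, and the function names are my own; real perception stacks use Kalman filters and far more sensor channels.

```python
# Sensor-fusion sketch: a complementary filter estimating a tilt angle
# by blending two imperfect sensors. The gyroscope integrates angular
# rate (smooth but drifts over time); the accelerometer gives an
# absolute angle (drift-free but noisy). All readings are synthetic.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro rate (deg/s) and accelerometer angle (deg) streams."""
    angle = accel_angles[0]  # initialize from the absolute sensor
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Trust the gyro for short-term changes, the accelerometer
        # for long-term correction.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Synthetic example: the device sits still at 5 degrees, but the gyro
# reports a constant 1 deg/s of drift. Naive integration of the gyro
# alone would wander to 10 degrees over 500 steps; the fused estimate
# stays anchored near the true angle.
est = complementary_filter(gyro_rates=[1.0] * 500, accel_angles=[5.0] * 500)
```

The same principle — combining complementary error profiles of different sensors — is what "sensor fusion and contextual awareness" means at every scale, from a headset tracking your head to a robot localizing in a warehouse.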
🛠️ 5. Infrastructure
The backbone that powers it all
None of this works without a robust digital backbone. The infrastructure layer includes computing, connectivity, and governance tools that make spatial systems scalable and trustworthy.
- Compute: Edge, cloud, real-time systems (ROS, OpenXR)
- Connectivity: IoT, 5G
- Data: Spatial databases, semantic maps
- AI Infrastructure: ML pipelines, generative models
- Trust & Governance: Blockchain, privacy-preserving layers, decentralized identity
Some examples of players in this category are:
- Compute: NVIDIA, Amazon AWS, Latent AI
- Connectivity: Qualcomm, Pollen Mobile, Hologram
- Data: Google Maps Platform, Esri, Cartesia
- AI Infrastructure: OpenAI, Rendered.ai, Covariant
- Trust & Governance: Microsoft Entra, Oasis Labs, Spruce ID
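At their core, spatial databases answer one question: "what is near this point?" Here is a hedged, brute-force sketch of that query over latitude/longitude pairs using the haversine formula. The points of interest are hypothetical; production systems (including the platforms named above) use spatial indexes such as R-trees or geohashes rather than a linear scan.

```python
import math

# Spatial-data sketch: a brute-force nearest-neighbor lookup over
# (latitude, longitude) points using the haversine great-circle
# distance. Production spatial databases use indexes (R-trees,
# geohashes) to avoid scanning every point.

EARTH_RADIUS_KM = 6371.0

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def nearest(query, points):
    """Return the point closest to `query` (brute force, O(n))."""
    return min(points, key=lambda p: haversine_km(query, p))

# Hypothetical points of interest (lat, lon)
poi = [(37.7749, -122.4194),  # San Francisco
       (34.0522, -118.2437),  # Los Angeles
       (47.6062, -122.3321)]  # Seattle
closest = nearest((36.7783, -119.4179), poi)  # query from central California
```

Every layer above ultimately leans on queries like this: a delivery bot planning a route, a headset anchoring content to a room, a digital twin aligning with its physical counterpart.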
Spatial computing isn’t a single technology; it’s a convergence of systems that helps machines make sense of space and gives humans new ways to interact with it.
This framework is a starting point. As the line between digital and physical continues to dissolve, we’ll need new mental models to understand what’s happening.
Whether you're building, investing, researching, or just curious, I hope this gives you a clearer view of where we’re headed.
Because we’re not just computing anymore. We’re computing in the world.