What AWE USA 2025 Told Us About the State of XR


AWE 2025 delivered another standout year, with over 400 speakers and an expo floor filled with the latest in augmented and virtual reality, spanning gaming, entertainment, marketing, and enterprise. From walking the floor to tuning into keynotes and side conversations, I noticed six consistent themes that show where XR stands today and where it’s headed.

1️⃣ AI is XR’s Power-Up

AWE founder and CEO Ori Inbar kicked off the event with his annual keynote, full of energy and enthusiasm. One of his big messages to the packed crowd was "AI ❤️ XR," highlighting the relationship between these two powerful technologies, which are shaping the next wave of computing. Inbar talked about how AI is making XR "better, faster, and cheaper," but he spent a considerable amount of time on how "AI needs XR" to understand the world around it through multi-modal AI and perception systems like computer vision.

Perhaps the most striking comment Inbar made in his keynote this year was that "XR would harness AI and ride it all the way to the mainstream." This felt especially true in many of the major announcements that followed his presentation.

Google emphasized the major role Gemini is playing in AndroidXR, deeply embedding AI in its upcoming line of glasses to enable greater contextual awareness. Qualcomm demoed AI running locally on AR glasses using its new Snapdragon AR1 platform, highlighting privacy, speed, and battery advantages. Meanwhile, Snap framed the public launch of its new Specs as an AI-first experience, with a built-in assistant powered by Snap's AI and deep integration with OpenAI and Google's Gemini.

But it wasn't just the big players leveraging AI. GenAI was being used to accelerate content development on developer platforms and to create new ways to engage consumers in games and marketing activations.

The takeaway: XR is riding AI to reach the mainstream. Major players like Google, Qualcomm, and Snap reinforced the idea that AI isn’t just enhancing XR; it’s becoming essential to its next chapter.

2️⃣ Blurring Boundaries in Spatial Tech

The convergence of AI and XR is just the beginning. AWE 2025 made clear that we’re seeing a broader merging of spatial technologies under the banner of spatial computing. XR companies now intersect with AI, robotics, simulation, and machine perception.

This shift was underscored in Auki Labs’ keynote, Digital Beings in the Physical World, which featured a Unitree robot and highlighted partnerships using collaborative machine perception to help robots navigate retail spaces. Niantic Spatial also showed how its large geospatial AI models can give robots the physical context to better understand and act in their environment.

AWE conversations weren’t limited to XR this year. They spanned the full spatial computing stack, touching everything from AI-powered wearables to robotic simulation and machine perception.

The takeaway: The boundaries between AR/VR, physical AI, and environmental simulation are blurring fast. What once felt like separate verticals is becoming shared infrastructure. The next era of spatial computing won’t be defined by devices alone, but by how well these systems work together.

3️⃣ The MR Headset Trifecta Is Finally Here

In partnership with Samsung, Google presented its AndroidXR platform, reintroduced Project Moohan, Samsung’s upcoming mixed reality headset, and offered private demos of the device at the event. According to Wccftech, the device is expected to launch on October 13, 2025, first in Korea, followed by global availability.

Google joins Meta and Apple to form a true headset trifecta, finally giving us the third leg of the stool needed to consider mixed reality headsets a proper category. The AndroidXR-powered device injects new energy into the spatial computer category, which last got a big bump from the long-awaited debut of Apple’s Vision Pro. This is expected not only to attract new investment in the space but, more importantly, to give developers an opportunity to scale their content, as they now have multiple channels through which to distribute it.

The arrival of the MR headset trifecta will likely boost consumer confidence and accelerate adoption. With each major player investing in its own ecosystem, we can expect a surge in marketing, more polished user experiences, and a race to deliver the first breakout apps. Healthy competition should also drive collaboration on standards, laying the groundwork for mixed reality to mature as a mainstream category.

The takeaway: With all the major tech companies now investing in MR headsets, it is officially go time for spatial computers. The next few years will be critical in determining whether this category can become the next big thing for consumers.

4️⃣ Goodbye to the One-Glass Future

AWE 2025 confirmed what’s been quietly building: the dream of a single, do-everything pair of smartglasses is over. Qualcomm and Google laid out a future with multiple headworn wearable categories, not a single device.

It is clear that this next wave of computing will strongly resemble our present, with two major categories of personal devices, each serving a different role: mixed reality headsets and smartglasses. This mirrors how computing works today, with high-power tasks happening on desktops or laptops (soon to be spatial computers), while more portable experiences happen on phones (soon, smartglasses).

Google showcased options within the smartglasses category, including wired and wireless AR glasses, along with a growing emphasis on AI glasses: lightweight, assistant-driven devices designed for daily use.

I've talked before about how AR glasses seem to be at a crossroads. Mixed reality headsets resemble the early days of PCs: powerful but somewhat bulky devices anchored in a room at the office or at home, maybe one per household if you can afford it. Meanwhile, smartglasses, especially AI glasses, feel more like feature phones: not full portable computers, but able to perform specific tasks very well. Both categories will keep evolving, becoming more capable, wearable, and affordable until they reach milestones comparable to laptops and smartphones. And just as the smartphone hasn't eliminated the need for a personal computer, we should expect our smartglasses to co-exist with spatial computers.

The takeaway: It is time to shed the notion that there will be one pair of glasses to rule them all. Instead, the focus must be on investing in the natural evolution of the two major categories already in play: PC to spatial computer and smartphone to smartglasses.

5️⃣ Developer Fatigue is Real

While the developer opportunity for XR is expanding, it comes at a time when many developers have been working in this space for nearly a decade. In talking to developers at the event, I found there’s still excitement, but it’s tempered with exhaustion. Every new platform promises the future, but few have delivered a stable base on which to thrive. And those recently announced are just getting started.

Developers are being pulled into unproven ecosystems, with little clarity on adoption or ROI. Even as standards like OpenXR try to streamline the space, fragmentation remains high, and the talent pool is stretched across too many headsets, SDKs, and roadmaps. For many developers, the hardest part of staying in XR is starting over yet again.

Platform players must focus on carving out a clear path for developers to succeed, going beyond tools to include funding and audience creation. They should also be sensitive to the overall demands developers face across XR and AI, and tap the ecosystem only when they are confident the opportunity is right.

The takeaway: The XR developer community is eager but fatigued, having weathered years of fragmented platforms and unfulfilled promises. For XR to grow, platforms must go beyond tools, offering real support, funding, and clarity to justify another round of commitment.

6️⃣ XR’s Capital Freeze

At AWE 2025, the underlying pain point wasn’t tech; it was funding. In talking to founders, indie developers, and agencies, one clear message emerged: the tools are here, the platforms are multiplying, but the money is missing.

Initial funding for XR projects is harder to come by than ever, and even agencies struggle to find clients with budgets for immersive work, as much of the innovation budget is going to GenAI. VCs, likewise, are pouring resources into AI, not XR. Many are still building, but without a clear runway or a path to sustainable funding.

This seemed to be felt more by those building for consumers than by those building enterprise solutions. Many talks on stage showcased XR investment in aerospace and defense, AEC, and manufacturing. In these sectors, the ROI comes from improving operational efficiency, reducing errors and omissions, and scaling training through simulation, which makes the business case for investing in XR much clearer.

The takeaway: AWE 2025 revealed that XR’s growth isn’t being stalled by technology but by a funding gap. As investment shifts to AI, XR struggles to secure the capital needed to realize its potential, especially on the consumer side.

AWE 2025 reflected a space in transition. AI is breathing new life into XR, and the spatial stack is finally starting to click, with clearer device categories, deeper platform investment, and greater technical convergence. But alongside that excitement came hard truths: developers are stretched thin, the dream of one-glass-fits-all is behind us, and funding is more challenging to secure than ever. The industry is maturing, but it’s also recalibrating. The next phase of XR won’t be about chasing the next big device; it’ll be about creating stability, clarity, and capital pathways that let this new ecosystem actually thrive.


Disclosure: Tom Emrich has previously worked with or holds interests in companies mentioned. His commentary is based solely on public information and reflects his personal views.