Samsung is the first out of the gate with a new mixed reality headset, dubbed “Project Moohan,” built on the newly unveiled Android XR platform. The device is set to hit the market in 2025, and we got a sneak peek at an early model.
During my demo session, Samsung and Google remained tight-lipped about key specs such as resolution, weight, field of view, and pricing. They were also firm about not allowing photos or video, so for now we’re left with just a single official image.
If you’re picturing Project Moohan as a blend of Quest and Vision Pro, you wouldn’t be far off. The device borrows heavily from Vision Pro’s design, including its color scheme and button layout, not to mention similar calibration steps. It’s clear this headset is paying close attention to what’s already out there.
On the software side, Android XR is what you’d expect if someone were tasked with merging Horizon OS and visionOS into something new. How closely Project Moohan and Android XR echo the industry’s two major headset platforms is quite remarkable.
This isn’t to imply plagiarism; rather, it reflects a common tech industry practice: taking inspiration from successful designs, refining them, and building something better. If Android XR and Project Moohan manage to incorporate the best traits of their rivals while avoiding the pitfalls, that’s a win for both developers and consumers. So far, they seem to capture many of the appealing parts.
Examined up close, the Project Moohan headset is an impressive piece of equipment. It sports a goggle-like form similar to Vision Pro, but differs in using a rigid strap with an adjustable tightening dial rather than Vision Pro’s soft strap, which many find uncomfortable. Its open-peripheral design makes it well suited to augmented reality applications, much like Quest Pro, and it even includes magnetic snap-on blinders for a more immersive experience.
Despite the design similarities, Project Moohan lacks Vision Pro’s external display that shows the user’s eyes, a feature I find valuable for interacting with people nearby, though its usefulness is often debated.
Though Samsung remains mum on technical details, the headset is known to run a Snapdragon XR2+ Gen 2 processor, a souped-up version of the chip powering Quest 3 and Quest 3S. My hands-on time revealed a few more details: the headset uses pancake lenses with automatic IPD adjustment, made possible by built-in eye tracking. Its field of view didn’t initially seem as wide as that of Quest 3 or Vision Pro, though this could change with the various forehead pad options, which might let the eyes sit closer to the lenses.
The field of view in my demo felt somewhat constrained but still immersive. Interestingly, visibility fell off toward the edges of the display, likely due to a brightness drop-off, a quirk that might improve if the lenses can be brought closer to the eyes. On lenses overall, Quest 3 still seems to lead, followed by Vision Pro, with Project Moohan trailing slightly.
Samsung will offer its own controllers for Project Moohan, but they’re still under wraps, and it’s undecided whether they’ll be bundled with the headset or sold separately. My demo therefore relied entirely on hand- and eye-tracking input, once again reminiscent of Horizon OS and visionOS. The headset supports several input styles, such as raycast cursors and eye-plus-pinch selection, and has downward-facing cameras so hand gestures remain comfortable even at your sides.
One eye-catching detail was how clear my hands looked through the passthrough cameras: sharper than on Quest 3 and with less blur than Vision Pro, albeit under near-perfect lighting. Objects farther away looked less defined, however, suggesting the cameras may be focused at roughly arm’s length.
Exploring Android XR is an intriguing experience; it seamlessly combines elements of Horizon OS and visionOS. The home screen resembles Vision Pro’s, with app icons on translucent backdrops, and a glance and a pinch brings up floating app panels, a gesture straight out of Vision Pro’s playbook.
The system aesthetics lean more toward Horizon OS, and windows can be repositioned simply by grabbing an invisible frame around them. Beyond flat apps, Android XR also handles fully immersive experiences. An immersive Google Maps was particularly engaging, reminiscent of Google Earth VR, offering 3D city models, Street View imagery, and newly introduced volumetric captures of indoor spaces.
Although not as sharp as photogrammetry scans, these captures are rendered in real time. Google says capture sharpness is set to improve, and notes that the captures are currently generated on-device rather than streamed.
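For developers, the floating-panel model described above maps onto Google’s Jetpack Compose extensions for XR. The sketch below is illustrative only: it assumes the pre-release androidx.xr.compose APIs Google has documented for Android XR, so exact package names, composables, and modifiers may differ in the shipping SDK.

```kotlin
// Minimal sketch, assuming the pre-release Jetpack Compose for XR APIs
// (androidx.xr.compose); names and signatures may change before release.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.movable
import androidx.xr.compose.subspace.layout.resizable
import androidx.xr.compose.subspace.layout.width

@Composable
fun FloatingAppPanel() {
    // Subspace opens a 3D region of the scene where spatial elements live.
    Subspace {
        // SpatialPanel hosts ordinary 2D Compose UI on a panel in space.
        SpatialPanel(
            SubspaceModifier
                .width(1280.dp)   // panel size within the scene
                .height(800.dp)
                .movable()        // user can grab and reposition the panel
                .resizable()      // user can resize it
        ) {
            // Any regular 2D composable content goes here.
            Text("A flat Android app, floating in the user's space")
        }
    }
}
```

The appeal of this model, if it works as described, is that an existing 2D Android app can be placed into a grabbable spatial panel with minimal changes, which is presumably how the flat apps I saw in the demo are presented.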
Google Photos also shines on Android XR, automatically converting 2D photos and videos into 3D with quality reminiscent of Vision Pro’s equivalent feature. The YouTube app has been revamped to take full advantage of Android XR, offering traditional flatscreen content alongside 180, 360, and 3D footage. Even 2D videos converted to 3D looked impressive in my demo, though it’s unclear whether the conversion happens automatically or requires creators to opt in. More details on this will likely emerge.
The standout feature of Android XR and Project Moohan, on both the hardware and software front, is undoubtedly the integration of conversational AI. Google’s Gemini, specifically the ‘Project Astra’ variant that debuts right on the home screen, gives the headset a distinct competitive edge over the AI found on current headsets.
Unlike its contemporaries, Gemini continuously ‘sees’ what you see, both in the real world and in virtual content, which lets it be far more conversational and integrated. Vision Pro’s Siri is limited to voice input, and Meta’s experimental AI can’t perceive the virtual world, whereas Gemini processes visual information intelligently, delivering an unusually fluid experience without awkward pauses.
Gemini retains a rolling 10-minute memory, letting it seamlessly pick up threads from earlier in the conversation. The demo showed Gemini navigating spatial content, translating languages, and keeping the dialogue coherent across different requests, the kind of continuity only the most recent AI models can manage.
Beyond answering general questions, Gemini can also control the headset. A command like “show me the Eiffel Tower” drops you into a 3D view in Google Maps, showing its potential for spatial interaction, and follow-up questions such as “how tall is it?” are answered naturally because Gemini can draw on the real-time or stored virtual content in front of you.
Though Gemini on Android XR covers the assistant essentials most smartphone AIs offer, such as texting, emailing, and setting reminders, how deep its XR-specific capabilities go remains to be seen. Even so, this iteration of Gemini is the most capable AI engagement yet on a headset, surpassing what competitors currently offer, though Apple and Meta are undoubtedly pursuing similar work. How long Google can stay ahead is an open question.
Ultimately, this seamless integration hints at Gemini’s potential as the centerpiece of everyday smartglasses, a direction I also got to explore, but that’s a conversation for another article.