Samsung's first extended reality (XR) headset, "Project Moohan," has reportedly been delayed and is now expected to launch in late 2025. Android Authority reports that Samsung's suppliers will begin mass production of the headset's components next month.
Earlier rumors suggested Samsung could launch the Android XR-based headset alongside the upcoming Galaxy Z-series foldable phones at the next Galaxy Unpacked event.
Samsung Project Moohan
Last week, Samsung showed off its Project Moohan headset at Mobile World Congress in Barcelona. According to the company, the headset will use multimodal AI to enable more natural, conversational interaction.
Samsung did not reveal many details about the headset's specifications, but a report by The Elec offers a glimpse of its potential features.
The Project Moohan headset will likely be powered by Qualcomm's Snapdragon XR2 Plus Gen 2 chipset. On the display front, it could use 1.3-inch OLEDoS (OLED on Silicon) panels with 4K resolution, reportedly being developed by Sony.
The displays are said to have a pixel density of 3,800ppi, higher than the 3,391ppi of Apple's Vision Pro. Like the Vision Pro, Samsung's XR headset is also expected to offer video pass-through.
Other Features on Android XR
The device will give users a virtual workspace with access to apps such as Google Maps for navigation, YouTube for streaming, and Gemini for AI assistance. Because it runs Android XR, it will also be compatible with mobile and tablet apps from the Google Play Store.
Additionally, Samsung and Google have reportedly been developing native applications optimized for the platform, including a full-fledged YouTube experience with a virtual screen and 3D image support in Google Photos.
One of the most significant aspects of the Project Moohan headset is likely to be its Google Gemini integration. The AI assistant is expected to provide voice- and vision-based interaction, device control, and context-aware assistance such as information lookup and step-by-step guidance.