Google has begun rolling out Gemini Live features to Pixel 9 and Galaxy S25 models, albeit later than expected. Unveiled at Mobile World Congress in Barcelona at the end of February, the new capabilities only started reaching users at the beginning of April. Built on real-time video analysis and screen sharing, they let the Gemini AI system operate across multiple modes: users can now query Gemini not only by text or voice, but also through what is on their screen or in front of their camera.
At the heart of the new system is the AI's ability to respond to anything shown on the phone's screen or captured by its camera. Pixel 9 and Galaxy S25 users can activate Gemini Live by long-pressing the power button; the device then analyzes what the camera sees and offers instant feedback. The experience promises a more holistic AI assistant than Google Assistant's interaction model to date.
Users can organize their wardrobes and shape their shopping choices with Gemini Live
With the new system you can, for example, point the camera at your closet and ask Gemini what to wear. Likewise, you can ask for laundry-care instructions or for tips on organizing your clothes. Gemini Live interprets this visual input and replies by voice or in writing. It does not merely answer the question; it also supports the decision by comparing options in context.
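The consumer Gemini Live feature is built into the phone itself, but the same multimodal capability is exposed to developers through Google's public Gemini API. A minimal sketch in Python, assuming the google-generativeai package, a valid API key, and a hypothetical closet.jpg photo standing in for the camera feed:

```python
# Sketch of the "point the camera and ask" use case via the public
# Gemini API. Assumes: pip install google-generativeai pillow,
# a valid API key, and a local photo named closet.jpg (hypothetical).
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice
photo = Image.open("closet.jpg")

# The API accepts mixed image-and-text content in a single request.
response = model.generate_content(
    [photo, "Based on these clothes, what should I wear for a rainy office day?"]
)
print(response.text)
```

Unlike Gemini Live's continuous camera stream, this sends a single still image, but it illustrates the same underlying idea: the model grounds its answer in the visual input rather than in text alone.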
The screen-sharing feature is similarly notable. Users can get advice from Gemini by sharing their screen while browsing shopping sites, and the AI tailors its feedback to whatever is on display. This makes possible not only object recognition but also contextual evaluation of on-screen content.
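There is no public API for Gemini Live's live screen stream, but the contextual evaluation it performs can be approximated by sending a one-off screenshot to the same Gemini API. A sketch assuming a desktop environment where PIL's ImageGrab is supported (Windows or macOS), reusing the setup from the previous example:

```python
# Sketch: approximate screen sharing by grabbing a static screenshot
# and asking for feedback on its contents. This is a desktop-only
# stand-in; the on-device feature streams the screen in real time.
import google.generativeai as genai
from PIL import ImageGrab

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice

screenshot = ImageGrab.grab()  # captures the current screen contents
response = model.generate_content(
    [screenshot, "I'm comparing the jackets shown on this shopping page. "
                 "Which looks like the better value, and why?"]
)
print(response.text)
```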
Although Pixel 9 and Galaxy S25 users get these features first, other Android users are not shut out entirely: the same capabilities can be accessed through the Gemini app. There is a significant catch, however: they require a Google One AI Premium subscription. Users who pay $20 per month can use the advanced features regardless of device model.
That said, Gemini Live's performance still depends on the device. On more powerful models, real-time image processing runs faster and more stably; how the system will perform on older hardware is not yet clear. The user experience may therefore vary from device to device.
By limiting these features to a small number of devices, however, Google has left the wider audience waiting. Support for more Android models is expected in the coming months, and the usage data and feedback gathered in the meantime may shape the system's further development. Google is expected to keep folding its AI investments into its devices.