
Smartphone AI Illumination: Google's Hidden App Unveils Optimal AI Performance on Mobile Devices

Google AI Edge Gallery, an experimental smartphone application, lets users run multiple AI models entirely on-device, offline, and at respectable speed.

A mobile AI application hidden within Google services sheds light on optimized AI performance on smartphones


The future of Artificial Intelligence (AI) on smartphones is heading towards on-device processing, with the aim of making as many AI tasks as possible run locally. This shift promises a more secure and efficient way of handling AI workloads, keeping personal data on the device itself instead of sending it to remote servers for processing.

One of the versatile models leading this change is Gemma 3n, which can process images, transcribe audio, and engage in chat. To make the most of this model, a smartphone should have a powerful Neural Processing Unit (NPU) or AI accelerator chip, and preferably 8 GB or more of RAM.

Google has introduced an app called Google AI Edge Gallery, available on the Play Store, which developers can use to build AI experiences into their own apps. The Gallery itself is a one-stop shop for running AI models locally, free of charge and with no internet connection required.

AI models compatible with Google AI Edge Gallery can be downloaded from the HuggingFace LiteRT Community library. However, running AI models that are not available in the LiteRT gallery requires some technical knowledge.

Because Google AI Edge Gallery runs AI models offline, tasks like interpreting images, summarizing reports, proofreading, research, image editing, and explaining what the camera sees can all be completed without an internet connection.

Google AI Edge Gallery should not be confused with Gemini, Google's flagship AI platform, which replaces the traditional Google Assistant on Android devices and Google Home/Nest devices starting in October 2025. Gemini enhances language understanding, supports complex tasks, provides personalized recommendations, and integrates with Google Workspace apps for calendar, task, and note management, making it a comprehensive AI assistant across devices.

Not all models work well with GPU acceleration, and some may need to run on the CPU. For instance, the Gemma 3n-E2B model is the fastest at transcribing audio clips and produces high-quality responses, but it may not perform optimally with GPU acceleration. Meanwhile, the Qwen 2.5 model takes longer to summarize text than Phi-4 mini, though some users prefer the formatting of its output.
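Since backend support varies from model to model, one practical way to decide between CPU and GPU on a particular device is simply to time the same task on each backend. A minimal sketch of such a comparison, assuming a caller-supplied `run` callable that wraps whichever inference backend is being tested (the callable and the function names here are illustrative, not part of the Gallery's API):

```python
import time

def time_backend(run, *args, **kwargs):
    """Time a single inference call and return (result, seconds)."""
    start = time.perf_counter()
    result = run(*args, **kwargs)
    return result, time.perf_counter() - start

def pick_faster(backends, *args, **kwargs):
    """Given a dict of {backend_name: run_callable}, return the name of the fastest one."""
    timings = {}
    for name, run in backends.items():
        _, seconds = time_backend(run, *args, **kwargs)
        timings[name] = seconds
    return min(timings, key=timings.get)
```

In practice, each callable would invoke the model through the chosen runtime with its CPU or GPU delegate; the harness only measures wall-clock time for the same prompt on each.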

It's worth noting that the response time of AI models can vary significantly. For example, Gemini completes an image identification task in 3 seconds compared to 11 seconds for Gemma 3n.

The HuggingSnap app, an open-source project, brings Visual Intelligence-style capabilities to the iPhone and serves as a sign of future advancements in on-device AI. On Android, users can also import their own AI models into Google AI Edge Gallery if they are converted to the .litertlm or .task file format and pushed into the phone's Download folder.
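The import flow above can be sketched as a small helper that checks the file extension before building the `adb push` command that copies the model onto the phone (the `/sdcard/Download` destination path and the helper names are assumptions for illustration; the supported extensions are the .litertlm and .task formats mentioned above):

```python
from pathlib import Path

# Model formats Google AI Edge Gallery recognizes for imported models
SUPPORTED_EXTENSIONS = {".litertlm", ".task"}

def is_importable(model_file: str) -> bool:
    """True if the file uses a format the Gallery can import."""
    return Path(model_file).suffix in SUPPORTED_EXTENSIONS

def adb_push_command(model_file: str, dest: str = "/sdcard/Download") -> str:
    """Build the adb command that copies the model into the phone's Download folder."""
    if not is_importable(model_file):
        raise ValueError(f"unsupported model format: {model_file}")
    return f"adb push {model_file} {dest}/"
```

For example, `adb_push_command("gemma-3n.task")` yields `adb push gemma-3n.task /sdcard/Download/`, which can be run from a computer with USB debugging enabled on the phone.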

In summary, the shift towards on-device AI is an exciting development in the world of smartphone technology. With apps like Google AI Edge Gallery, users can enjoy a more secure and efficient way of handling AI tasks, all while keeping their personal data on their devices.
