
MWC 2024: All About Qualcomm's On-Device Generative AI for Android Smartphones


Preeti Anand

Qualcomm demonstrated several new generative artificial intelligence (AI) technologies for Android smartphones at the Mobile World Congress (MWC) 2024 event. These features will run on Snapdragon and Qualcomm platforms entirely on-device. Aside from introducing a dedicated large language model (LLM) for multimodal replies and an image generation tool, the company also released over 75 AI models that developers can utilise to create customised apps.


Qualcomm detailed all of the AI technologies it showed at MWC in a blog post

One notable feature is that Qualcomm's AI models run entirely on the device, unlike most recent AI models such as ChatGPT, Gemini, and Copilot, which process information on remote servers. On-device features and apps built with these models can be personalised for users while reducing privacy and reliability concerns. More than 75 open-source AI models, including Whisper, ControlNet, Stable Diffusion, and Baichuan 7B, are now available to developers via Qualcomm AI Hub, GitHub, and Hugging Face.
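For context, here is a minimal sketch of how a developer might pull one of the listed open models, Whisper, from Hugging Face using the standard transformers pipeline API. The checkpoint name is illustrative, and this generic route does not include Qualcomm's hardware-aware optimisations.

```python
# A minimal sketch of fetching one of the listed open-source models
# (Whisper) from Hugging Face and running it locally. The checkpoint
# "openai/whisper-tiny" is a public model used here for illustration.
from transformers import pipeline

# Build a local speech-to-text pipeline; weights are downloaded once,
# cached, and inference then runs entirely on this machine.
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

result = transcriber("sample_audio.wav")  # path to a local audio file
print(result["text"])
```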

According to the company, because these AI models are tailored for its platforms, apps built on them will require less computational power and cost less to develop. The fact that all 75 models are small and designed for specific tasks also plays a role. While users will not get a one-stop-shop chatbot, the models enable numerous specialised use cases such as image editing or transcription.


Qualcomm has added numerous automated methods to its AI library

Qualcomm has also added automated tooling to its AI library to speed up the development of apps based on these models. "The AI model library automatically handles model translation from source framework to popular runtimes and works directly with the Qualcomm AI Engine Direct SDK, then applies hardware-aware optimisations," according to the release.
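As an illustration of the general concept in that quote (translating a model from its source framework to a popular runtime), the sketch below exports a toy PyTorch model to ONNX and runs it with ONNX Runtime. It uses generic open-source tooling, not the Qualcomm AI Engine Direct SDK itself.

```python
# An illustrative sketch of "model translation from source framework
# to popular runtimes": export a PyTorch model to ONNX, then execute
# it with ONNX Runtime. This is generic tooling, not Qualcomm's SDK.
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU())
model.eval()

# Translate from the source framework (PyTorch) to a portable format.
example_input = torch.randn(1, 16)
torch.onnx.export(model, example_input, "model.onnx")

# Execute with a popular runtime; a hardware-aware step would select
# an accelerator-specific execution provider here instead of the CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {session.get_inputs()[0].name: example_input.numpy()})
print(outputs[0].shape)
```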

Aside from the smaller AI models, the American semiconductor maker also introduced LLM tools. These are still at the research stage and were only demonstrated during the MWC event. The first is Large Language and Vision Assistant (LLaVA), a multimodal LLM with more than seven billion parameters. Qualcomm claims it can accept multiple types of data input, including text and images, and hold multi-turn conversations with an AI assistant about an image.
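To make the multimodal idea concrete, here is a hedged sketch of a single conversational turn about an image, using the community LLaVA 1.5 checkpoint published on Hugging Face. The checkpoint name and prompt format are assumptions based on that public port, not Qualcomm's on-device build.

```python
# A hedged sketch of asking a LLaVA-class multimodal model about an
# image. The "llava-hf/llava-1.5-7b-hf" checkpoint and the
# "USER: <image> ... ASSISTANT:" prompt format follow the community
# Hugging Face port and are assumptions, not Qualcomm's demo setup.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg")  # any local image
prompt = "USER: <image>\nWhat is happening in this picture? ASSISTANT:"

# The processor combines the image and text into one model input.
inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```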


Conclusion

Another technique that was demonstrated is Low-Rank Adaptation (LoRA). It was shown on an Android smartphone generating images through Stable Diffusion. LoRA is not an LLM itself; it is a fine-tuning method that reduces the number of trainable parameters in AI models, making them more efficient and scalable to customise. In addition to image generation, Qualcomm says it can be used to build bespoke personal assistants, improve language translation, and more.
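The parameter saving behind LoRA can be shown in a few lines: the pretrained weight matrix stays frozen, and only two small low-rank matrices are trained. The sketch below is a toy PyTorch illustration of the idea, not Qualcomm's implementation; the layer sizes and rank are arbitrary.

```python
# A minimal sketch of the Low-Rank Adaptation idea: freeze a large
# weight matrix W and train only two small matrices A and B, so the
# effective weight becomes W + B @ A. Dimensions are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=4, alpha=1.0):
        super().__init__()
        # Frozen pretrained weight: out_features * in_features parameters.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Trainable low-rank update: rank * (in_features + out_features)
        # parameters, a tiny fraction of the frozen weight.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        base = x @ self.weight.T
        update = (x @ self.lora_a.T) @ self.lora_b.T
        return base + self.scale * update

layer = LoRALinear(4096, 4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable {trainable:,} of {total:,} parameters")  # roughly 0.4%
```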
