Google’s new Gemini Nano large language model (LLM) is creating quite a buzz, promising faster on-device AI for your smartphone. But how will this compact model actually reach more phones? Part of the answer lies in MediaTek’s collaboration with Google.
MediaTek recently announced that its latest Dimensity 9300 and 8300 5G chipsets are now optimized to run Google’s Gemini Nano LLM. This effort entailed tweaking MediaTek’s NeuroPilot toolkit and porting the model over to work smoothly on the chipmaker’s AI processing units (APUs). But what does this optimization really mean for you?
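Porting a model like Gemini Nano to a phone’s AI processor generally hinges on quantization: shrinking weights from 32-bit floats down to low-bit integers so the model fits in mobile memory and runs efficiently on the APU. The snippet below is a minimal sketch of symmetric int8 quantization to illustrate the idea — it is not MediaTek’s NeuroPilot pipeline, and the function names here are hypothetical.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127].

    Assumes at least one weight is nonzero (otherwise scale would be zero).
    """
    # One scale factor covers the whole tensor: the largest magnitude maps to 127.
    scale = max(abs(w) for w in weights) / 127.0
    # Each weight becomes a small integer; storage drops from 32 bits to 8.
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values and the scale."""
    return [v * scale for v in q]

# Toy example: four weights round-tripped through int8.
w = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# The reconstruction error per weight is bounded by half a quantization step (scale / 2).
```

Real toolchains go further — per-channel scales, 4-bit packing, calibration on sample data — but this trade of a little precision for a 4x smaller, NPU-friendly model is the core idea behind fitting an LLM on a phone.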
In simple terms, it translates to snappier performance for Gemini-powered features on your next smartphone. For instance, expect the Pixel Recorder app on the Pixel 8 Pro to generate transcripts more quickly. Smart replies and predictions in Gboard should pop up faster too, and voice assistants could respond to commands with noticeably less delay.
Beyond speed gains, Gemini integration also spurs innovation in on-device AI. Running an LLM locally lets apps deliver experiences privately, without pinging the cloud. This paves the way for seamless next-gen features, from real-time video captions to intelligent search and recommendations.
The best part? You won’t need an expensive flagship to enjoy these perks. MediaTek’s collaboration means capable mid-range phones rocking its Dimensity chips can also tap into Gemini’s intelligence. This democratization of on-device AI will likely trickle down to even more affordable devices.
So when can you expect to wield Gemini Nano’s powers? If you already own a Pixel 8 Pro or a Galaxy S24, software updates should deliver the goods. For everyone else, keep an eye out for MediaTek’s Dimensity-powered mid-rangers launching later this year.