Google’s Gemma models have crossed the 150 million download mark, a notable milestone for a family of models that is only a little over a year old.
The announcement came directly from Omar Sanseviero, a developer relations engineer at Google DeepMind, who confirmed the figures on X over the weekend.
In addition to the downloads, more than 70,000 custom variants of Gemma have already been published on Hugging Face, a platform widely used by AI developers to share models.
Gemma, which launched in February 2024, is part of Google’s push to compete with other openly available model families, most notably Meta’s Llama series.
The latest versions of Gemma are multimodal, meaning they can process both images and text. They also support more than 100 languages and include variants optimised for specialised tasks such as drug discovery.
However, set against Meta’s Llama, Gemma’s traction looks modest. Llama passed 1.2 billion downloads in April 2025, roughly eight times Gemma’s total. In a space where momentum matters, that gap is hard to ignore.
But volume alone doesn’t determine usefulness. What’s more important is what developers can actually do with these models. And that’s where the trouble starts.
Developers have flagged both Gemma and Llama for restrictive, unclear licensing. Neither ships under a standard open-source license such as Apache 2.0 or MIT; each instead comes with its own custom terms of use, and some argue those terms are vague enough to make deploying the models in commercial settings risky.
That has led to hesitation, particularly among startups and businesses that can’t afford legal ambiguity. I’ve spoken with teams who say they’ve skipped over both Gemma and Llama entirely for this reason. The concern isn’t about capability, but clarity.
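That friction is visible even at download time. Gemma’s repositories on Hugging Face are gated: the weights won’t download until you’ve accepted Google’s terms on the model page and authenticated. A minimal sketch of the flow, assuming the `transformers` and `huggingface_hub` libraries and using `google/gemma-2-2b-it` purely as an illustrative model ID:

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gemma repos are gated: this token only grants access once you have
# accepted Google's usage terms on the model's Hugging Face page.
login(token="hf_...")  # a user access token from your Hugging Face settings

model_id = "google/gemma-2-2b-it"  # illustrative: a small instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what is a gated model repository?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Until the terms are accepted, `from_pretrained` raises a gated-repo access error rather than downloading anything. The mechanism is a reasonable gatekeeping choice on Google’s part, but it puts the license question in front of every developer before they’ve run a single token through the model.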
So while Google celebrates 150 million downloads, open questions remain about what those downloads actually represent. Are developers using the models in real products? Or are they just experimenting, unsure whether they can take them to market?