Today Google is announcing two new AI models: Gemma 2B and Gemma 7B. Each of the models is released “with pre-trained and instruction-tuned variants.”
In a press briefing, Tris Warkentin, a director at Google DeepMind, described the new models as “a new family of state-of-the-art smaller models, which will enable developers to do research and AI development in the open sphere with tools that are based on the research and technologies that we also use to build Gemini as well.”
In addition, Google is releasing a new “Responsible Generative AI Toolkit,” which provides guidance and tools for “creating safer AI applications with Gemma.”
According to Google, Gemma already beats Meta’s Llama 2 and Mistral in benchmark tests. “We are partnering with both Nvidia and Hugging Face and so they [the benchmarks] will be incorporated in the open LLM leaderboard from day zero,” said Warkentin.
How Open Is Gemma?
What does Google mean by “open” in this case? Jeanine Banks, VP and GM of Developer X and DevRel at Google, clarified that it does not mean open source.
“This term [open] has become pretty pervasive now in the industry, and it often refers to open weights models, where there is wide access for developers and researchers to customize and to fine-tune models,” she said. “But at the same time, the terms of use, things like redistribution as well as ownership of those variants that are developed, vary based on the model’s own specific terms of use. And so we see some difference between what we would traditionally refer to as open source, and we decided that it made the most sense to refer to our general models as open models.”
Later in the briefing, Banks was asked whether the Gemma models can be used commercially, or whether they are restricted to research and development activities. She replied that Google is providing a “commercially permissive license” for Gemma.
“So you won’t see any restrictions on the types of organizations that can use this, or whatever size of those organizations might be, or how many users they may have for their products,” she said. “So it is commercially permissive. We do want to take a responsible approach, though. And so we do have terms that really restrict, you know, promoting harm by using these models.”
She added that Google will monitor this “as we see how developers and researchers use these tools.”
Developer Tooling
Google will provide support for popular frameworks, such as JAX, PyTorch and TensorFlow, via native Keras 3.0, the deep learning API developed by Google for implementing neural networks.
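For developers who want to try that path, the flow looks roughly like the sketch below. This is a minimal example assuming the KerasNLP Gemma presets published alongside the launch; the preset names and the Kaggle-gated weight download are drawn from the launch documentation, not from the briefing itself, and may change.

```python
# Minimal sketch: loading a Gemma checkpoint through KerasNLP.
# Assumes keras>=3 and keras-nlp are installed, and that you have
# accepted Gemma's terms on Kaggle (the weight download is gated).
import os

# Keras 3 is multi-backend; pick JAX, TensorFlow, or PyTorch here,
# and do it before importing keras_nlp.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras_nlp

# "gemma_2b_en" is the pre-trained 2B preset; the instruction-tuned
# variant is "gemma_instruct_2b_en" (preset names as documented at launch).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("What is Keras?", max_length=64))
```

The backend switch is the point of the Keras 3.0 route: the same model code runs on JAX, TensorFlow, or PyTorch by changing one environment variable.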
Further support comes in the form of ready-to-use Colab and Kaggle notebooks, plus integration with popular tools such as Hugging Face, MaxText, and NVIDIA NeMo.
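The Hugging Face route is similarly short. The following is a minimal sketch, assuming the gated google/gemma-2b and google/gemma-7b-it repositories on the Hugging Face Hub and a transformers release with Gemma support; those model IDs come from the launch materials, not the briefing.

```python
# Minimal sketch: running Gemma via Hugging Face transformers.
# Assumes transformers (with Gemma support), torch, and accelerate
# are installed, and that you have been granted access to the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # the "-it" suffix marks instruction-tuned variants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory versus float32
    device_map="auto",           # places weights on a GPU if one is available
)

inputs = tokenizer("Write a haiku about open models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```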
Google added that pre-trained and instruction-tuned Gemma models can run on your laptop, workstation, or Google Cloud, with deployment options for Vertex AI and Google Kubernetes Engine (GKE).
Given that these are smaller models than large language models (LLMs) — including Google’s own Gemini models — what kinds of applications might be built with Gemma?
“So we think that there are a wide variety of applications for these models,” said Warkentin. “In fact, if you look at usage across the ecosystem, this is the most common size that developers would like to use — the 7B size for sort of text generation and understanding applications, from an open model standpoint.”
He added that the “generation quality” of such models has “gone significantly up in the last year,” so it is no longer necessary to use larger LLMs for many application use cases.
“So this unlocks completely new ways of developing AI applications that we’re pretty excited about,” he continued, “including being able to run inference and do training on your local developer desktop or laptop, with your RTX GPU or on a single host in GCP, with cloud GPUs as well.”
Originally published at The New Stack: https://thenewstack.io/gemma-google-takes-on-small-open-models-llama-2-and-mistral/

