Google has unveiled three new “open” generative AI models that it describes as “safer,” “smaller,” and “more transparent” than most of their peers. The additions, named Gemma 2 2B, ShieldGemma, and Gemma Scope, extend the Gemma 2 family introduced in May. Each model is tailored to a different application, with a shared emphasis on safety.
Unlike the proprietary Gemini models, which Google uses in its own products and also makes available to developers, the Gemma series is meant to foster goodwill within the developer community, much as Meta has done with its Llama initiative.
Gemma 2 2B is a lightweight model for generating and analyzing text. It is optimized to run on a range of hardware, including laptops and edge devices, and is available for certain research and commercial applications through Google’s Vertex AI model library, Kaggle, and Google’s AI Studio toolkit.
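For developers who want to try the model locally rather than through Google’s platforms, a minimal sketch using the Hugging Face transformers library might look like the following. It assumes the instruction-tuned checkpoint is published as google/gemma-2-2b-it and that you have accepted the license required for gated access.

```python
# Minimal sketch: running Gemma 2 2B locally via Hugging Face transformers.
# Assumes the instruction-tuned checkpoint "google/gemma-2-2b-it" and that
# the gated-model license has been accepted on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the trade-offs of small open language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is small, this should run on a single consumer GPU or, more slowly, on a laptop CPU, which is the deployment profile Google is targeting.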
ShieldGemma is a collection of “safety classifiers” built on top of Gemma 2 that detect and filter toxic content, including hate speech, harassment, and sexually explicit material. It can screen both the prompts sent to a model and the content the model generates, helping to ensure a safer user experience.
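In practice, a classifier built on a generative model like this scores content by asking the model a policy question and reading the probability it assigns to a yes-or-no verdict. The sketch below illustrates that pattern; the checkpoint name google/shieldgemma-2b, the guideline text, and the prompt wording are assumptions for illustration, and the exact template would come from the model card.

```python
# Conceptual sketch: scoring a piece of text with a generative safety
# classifier. The checkpoint name and prompt template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

user_text = "Some user-submitted text to screen."
guideline = "The text must not contain hate speech or harassment."
prompt = (
    "Does the following text violate this policy?\n"
    f"Policy: {guideline}\nText: {user_text}\nAnswer (Yes or No):"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

# Compare the probability mass on "Yes" vs. "No" to get a violation score.
yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
no_id = tokenizer.encode("No", add_special_tokens=False)[0]
score = torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()
print(f"Estimated probability of a policy violation: {score:.2f}")
```

The same call can be made once on the incoming prompt and again on the generated reply, which is how a classifier of this kind filters both sides of a conversation.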
Gemma Scope makes the internal processes of Gemma 2 models easier to inspect. According to Google, it consists of specialized neural networks that unpack the dense information Gemma 2 processes into a form researchers can analyze, giving them insight into how the model identifies patterns, processes information, and ultimately makes predictions.
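In the interpretability literature, the “specialized neural networks” used for this kind of work are typically sparse autoencoders: small models trained to rewrite a layer’s activations as a larger set of sparsely active, more interpretable features. The toy sketch below shows the general idea only; the dimensions and training details are placeholders, not Gemma Scope’s actual configuration.

```python
# Illustrative sketch of a sparse autoencoder, the standard tool for
# decomposing a model's activations into sparse, analyzable features.
# Dimensions and the sparsity penalty are placeholder values.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> features
        self.decoder = nn.Linear(d_features, d_model)  # features -> activations

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only a sparse subset of features active per input.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# Toy usage: decompose a batch of stand-in activations.
sae = SparseAutoencoder(d_model=2304, d_features=16384)
activations = torch.randn(8, 2304)  # stand-in for real model activations
features, reconstruction = sae(activations)

# Training balances faithful reconstruction against feature sparsity.
loss = nn.functional.mse_loss(reconstruction, activations) \
    + 1e-3 * features.abs().mean()
print(f"Reconstruction + sparsity loss: {loss.item():.3f}")
```

Once trained, researchers examine which inputs activate each feature, which is what lets them trace how the underlying model represents patterns in its data.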
The new Gemma 2 models arrive alongside a preliminary report from the U.S. Commerce Department that endorses the development of open AI models. The report highlights how open models make generative AI more accessible to smaller businesses, researchers, organizations, and individual developers, while also underscoring the need to monitor such models for potential risks.
By releasing these models, Google aims to enhance the transparency and safety of AI technologies while providing the developer community with powerful tools for innovation. This strategic move reflects Google’s commitment to advancing AI in a responsible and inclusive manner.