Google Unveils Gemma 4: A New Era for Open AI Models with Apache 2.0 Licensing
Today, we are thrilled to announce the next evolution in our open model family: Gemma 4.
Built from the same world-class research and technology behind Gemini 3, Gemma 4 isn’t just an incremental update—it’s a breakthrough in intelligence-per-parameter. For the first time, we are moving beyond "open-weight" to a fully open-source foundation by releasing the entire Gemma 4 family under the Apache 2.0 license.
*Image generated using AI for illustrative purposes only.*
What is Gemma 4?
Gemma 4 builds on the success of its predecessors, offering improved performance, efficiency, and scalability. Designed to run across a wide range of devices—from local machines to cloud environments—Gemma models are known for striking a balance between power and accessibility.
With this new version, developers can expect:
- Enhanced reasoning capabilities
- Better context handling
- Improved efficiency for edge deployment
- Stronger multilingual support
These upgrades make Gemma 4 suitable for everything from chatbots and coding assistants to research tools and embedded AI applications.
The Shift to Apache 2.0 License
Perhaps the most significant part of this announcement is the transition to the Apache 2.0 license, one of the most permissive and widely adopted open-source licenses in the world.
Why This Matters
Switching to Apache 2.0 means:
- Commercial freedom: Developers and companies can use Gemma 4 in commercial products without restrictive obligations.
- Modification rights: Users can freely modify and distribute the models.
- Patent protection: The license includes explicit patent grants, reducing legal uncertainty.
This move removes many of the gray areas that previously existed around AI model usage, making it easier for startups, enterprises, and independent developers to build on top of Gemma.
A Strategic Move in the AI Landscape
The release of Gemma 4 under a permissive license comes at a time when competition in the AI space is intensifying. By lowering barriers to entry, Google is positioning itself as a key player in the open-model movement.
This strategy could:
- Accelerate innovation by empowering the developer community
- Encourage wider adoption of Google’s AI ecosystem
- Challenge other model providers to adopt more open practices
What Developers Can Expect
For developers, Gemma 4 represents an opportunity to:
- Build AI-powered applications without heavy infrastructure costs
- Customize models for niche use cases
- Deploy locally with greater control over data privacy
The combination of performance improvements and licensing freedom makes Gemma 4 one of the most developer-friendly AI releases to date.
One Family, Four Versatile Sizes
Gemma 4 is designed to run everywhere, from the smartphone in your pocket to high-end data center workstations.
| Model | Parameters | Architecture | Context Window | Primary Use Case |
|---|---|---|---|---|
| E2B (Effective 2B) | 2.3B effective | Dense | 128K | Mobile, IoT, Raspberry Pi |
| E4B (Effective 4B) | 4.5B effective | Dense | 128K | Advanced mobile, local web apps |
| 26B A4B | 26B (3.8B active) | MoE | 256K | Low-latency coding & agents |
| 31B Dense | 30.7B | Dense | 256K | Frontier-level reasoning & math |
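The MoE entry in the table is easier to appreciate with a quick calculation: although the 26B A4B variant stores 26B parameters, only about 3.8B are active for any given token, so its per-token compute is closer to that of a small dense model. A minimal back-of-the-envelope sketch, using the figures from the table above:

```python
# Fraction of parameters active per token in the 26B A4B MoE variant,
# based on the table's figures (26B total, ~3.8B active per token).
total_params = 26e9    # total stored parameters
active_params = 3.8e9  # parameters active per token

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # roughly 15%
```

This is why an MoE model can offer "low-latency coding & agents" as its primary use case: inference cost scales with the active parameters, while the full parameter count still contributes capacity.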
Gemma 4 isn’t just another model update—it’s a statement. By pairing technical advancements with a truly open license, Google is helping to democratize access to powerful AI tools.
As the open AI movement continues to evolve, Gemma 4 could become a cornerstone for developers looking to innovate without limits.
Since the first Gemma launch, the model family has seen over 400 million downloads.