
Here’s how the Gemini-powered Siri will likely work under the hood

Earlier this week, Bloomberg reported that Google and Apple are close to reaching an agreement worth roughly $1 billion per year for a version of Gemini that will power the revamped Siri next year.

But perhaps more interesting than the price tag is one factor that will actually affect everyone’s experience: its architecture. Here’s a look at how it will likely work.

Is 1.2 trillion parameters a lot?

According to Bloomberg’s report, Google will provide Apple with a 1.2 trillion parameter model, which will run on Apple’s Private Cloud Compute servers, effectively preventing Google from accessing any of the data it handles. Privacy-wise, that’s great.

Size-wise, a 1.2 trillion parameter model is nothing to sneeze at. However, a direct comparison with the latest and greatest competing models is quite challenging.

That’s because in recent years, closed frontier AI labs like OpenAI, Anthropic, and Google have stopped disclosing the parameter counts of their latest flagship models. This has led to wildly varying speculation as to the true parameter count of offerings such as GPT-5, Gemini 2.5 Pro, and Claude Sonnet 4.5. Some put them below a trillion parameters, while others suggest they reach a few trillion. In reality, nobody really knows.

On the other hand, one thing most of these recent giant models have in common is an underlying architecture known as mixture of experts (MoE). In fact, Apple already employs a flavor of MoE in its current cloud-based model, which is rumored to have 150 billion parameters.

Siri’s Gemini-powered model will likely use a mixture of experts

In a nutshell, MoE is a technique that structures a model with multiple specialized sub-networks called ‘experts.’ For each input, only a few relevant experts are activated, which results in a faster and more computationally efficient model.

In other words, this allows MoE models to have very high parameter counts, while keeping inference costs much lower than if 100% of their parameters had to be activated for every input.
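To make that concrete, here is a minimal, toy sketch of how MoE routing works, written in Python with NumPy. It is purely illustrative and not the actual Gemini or Apple implementation; the sizes, expert count, and top-k value are made up for the example.

```python
# Toy sketch of mixture-of-experts routing (illustrative only; not the
# actual Gemini/Apple implementation). Each "expert" is a tiny weight
# matrix, and a router picks the top-k experts for each token.
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 64        # toy hidden size
NUM_EXPERTS = 8    # toy expert count (real models use dozens)
TOP_K = 2          # experts activated per token

# Router and expert weights (randomly initialized for the sketch).
router_w = rng.standard_normal((HIDDEN, NUM_EXPERTS))
expert_w = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN))

def moe_layer(token: np.ndarray) -> np.ndarray:
    # 1. The router scores every expert for this token.
    scores = token @ router_w
    # 2. Only the top-k scoring experts are selected...
    top = np.argsort(scores)[-TOP_K:]
    # ...and their scores are normalized into mixing weights.
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # 3. Only the selected experts run; the rest stay idle,
    #    which is where the compute savings come from.
    out = np.zeros_like(token)
    for w, idx in zip(weights, top):
        out += w * (token @ expert_w[idx])
    return out

print(moe_layer(rng.standard_normal(HIDDEN)).shape)  # (64,)
```

The key point is step 3: the model carries all of its experts in memory, but each token only pays the compute cost of the few experts the router picks for it.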

Here’s another thing about models that take the MoE approach: they usually have a maximum number of active experts and a maximum number of active parameters for each input, resulting in something like this:

A model with 1.2 trillion total parameters might use 32 experts, with only 2–4 experts active per token. This means only around 75–150B parameters are actually making calculations at any given moment, giving you the capacity of a massive model while keeping computational costs similar to running a much smaller model.
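For the curious, here is the back-of-the-envelope math behind those numbers. It assumes the 1.2 trillion parameters are split evenly across 32 experts and ignores shared parameters (such as attention layers), which is a simplification real models only approximate.

```python
# Rough math behind the example above (evenly split experts,
# shared parameters ignored for simplicity).
total_params = 1.2e12
num_experts = 32
params_per_expert = total_params / num_experts   # ~37.5B each

for active_experts in (2, 4):
    active_params = active_experts * params_per_expert
    print(f"{active_experts} experts active -> ~{active_params / 1e9:.0f}B parameters per token")
# 2 experts active -> ~75B parameters per token
# 4 experts active -> ~150B parameters per token
```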

IBM has also made a great video that explains in more detail how MoE works.

To be clear, there have been no reports regarding the architecture of the model that Google may provide Apple with, should they seal the deal on their reported partnership. But at 1.2 trillion parameters, it will very likely need an MoE-style design to run efficiently, since activating every parameter for every request would be far more expensive.

Whether that size will be enough to keep the Gemini-powered Siri competitive with the models that will be available by the time it launches next year is a different story.

