Multi-Agent Collaboration

Google’s New Paper

  • TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture
  • Google researchers have built a self-organizing AI system: a multi-agent network called TUMIX.
  • It’s a system where multiple AIs work together. Each agent uses a different tool: one writes code, one runs searches, and another reasons in plain text.
  • They independently solve the same problem, share their answers, and go through several rounds of refinement until the group reaches a consensus.
  • The results are striking. Gemini-2.5 with TUMIX improves performance by up to 17.4% over other test-time scaling approaches, at about half the inference cost. No retraining, no new data, just smarter coordination.
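The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the agent functions are hypothetical placeholders for tool-backed solvers, and consensus is checked here with a simple vote (the paper additionally uses an LLM judge to decide when refinement should stop early).

```python
from collections import Counter

# Hypothetical stand-ins for TUMIX-style agents, each backed by a
# different tool: code execution, web search, or plain-text reasoning.
# Each sees the question plus the previous round's shared answers.
def solve_with_code(question, peer_answers):
    return "42"  # placeholder: would write and run code

def solve_with_search(question, peer_answers):
    return "42"  # placeholder: would issue search queries

def solve_with_text(question, peer_answers):
    # Placeholder: a text-only reasoner that revises its answer
    # once it sees what its peers produced in the prior round.
    return "41" if not peer_answers else "42"

AGENTS = [solve_with_code, solve_with_search, solve_with_text]

def refine_until_consensus(question, max_rounds=3):
    """Run independent answers, share them, refine for a few rounds."""
    answers = []
    best = None
    for _ in range(max_rounds):
        # Every agent answers independently, conditioned on the
        # previous round's answers from the whole group.
        answers = [agent(question, answers) for agent in AGENTS]
        best, votes = Counter(answers).most_common(1)[0]
        if votes == len(AGENTS):  # full agreement: stop early
            return best
    return best  # fall back to the majority answer
```

Running `refine_until_consensus("...")` here converges in two rounds: the text agent disagrees at first, then aligns after seeing its peers' answers.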

TUMIX shows that intelligence can emerge from organization, not just scale.

  • Consider the progress of human civilization. The role of individual geniuses is undeniable, but the power of large-scale group collaboration is even more important.
  • I’ve always been puzzled by something. Current large models have learned almost all of humanity’s (publicly available) knowledge within a single model. However, this knowledge is often inconsistent and sometimes even contradictory. How do these large models achieve self-consistency?
  • For example, each of us, despite our limited knowledge, is self-consistent at any given moment. We don’t simultaneously believe in two contradictory ideas.
  • Individuals form societies and nations with a division of labor, and ideas compete in a process akin to Darwinian selection: correct (or at least viable) ideas win out over incorrect ones, step by step driving the progress of society as a whole.
  • The next leap in reasoning might not be a trillion-parameter model, but a network of smaller models learning to think together.