
The models keep getting better, but the number of organizations that can actually train them keeps shrinking. A frontier training run now costs tens of millions of dollars in compute alone. The GPU supply chain is controlled by a handful of companies, and the labs that can afford to play at the frontier are increasingly keeping their work behind closed doors.
This is a structural problem. When only three or four organizations on the planet can train state-of-the-art models, the entire field's direction gets shaped by their priorities, their values, and their commercial incentives. The research community that built the foundations of modern AI (openly, collaboratively, across universities and labs around the world) is increasingly locked out of the frontier.
Prime Intellect was founded to fix this. And with over $70 million raised from Founders Fund, Menlo Ventures, and some of the most respected names in AI, they're one of the few companies with both the vision and the resources to pull it off.
The Problem They're Solving
To understand why Prime Intellect matters, you need to understand the two bottlenecks that are centralizing AI development.
The first is compute access. Training a large model requires thousands of high-end GPUs running in coordination for weeks or months. That kind of infrastructure is expensive to build, expensive to operate, and almost impossible to access on-demand unless you're a major cloud customer or a very well-funded lab. For everyone else (independent researchers, startups, universities, open-source collectives) the cost of admission keeps going up.
The second is architectural. The standard approach to large-scale training assumes you have a single, tightly connected cluster of GPUs sitting in one data center. That assumption worked when models were smaller. But now it means your training scale tops out at whatever cluster size you can afford. And the largest clusters in the world sit inside Big Tech data centers.
Prime Intellect attacks both of these simultaneously. They've built a global compute exchange that pulls together GPU clusters from around the world, along with the distributed training infrastructure to make all of those clusters work together as one, even when they're on different continents.
The Tech That Makes It Work
The interesting part is how they pulled it off.
Distributed training across geographically separated clusters is a genuinely hard problem. The latency between data centers on different continents is orders of magnitude higher than the interconnect latency inside a single cluster. Gradient synchronization (averaging updates across all training nodes so every copy of the model stays consistent) becomes a major bottleneck when your nodes are separated by oceans. And hardware failures, which are already common at scale, become almost guaranteed when you're stitching together heterogeneous infrastructure from different providers.
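To make the communication bottleneck concrete, here's a back-of-the-envelope sketch. The model size matches INTELLECT-1, but the bandwidth figures are illustrative assumptions, not measurements from any particular deployment:

```python
# Back-of-the-envelope: cost of naively synchronizing full gradients over a WAN.
# Bandwidth numbers below are illustrative assumptions, not measured values.

PARAMS = 10e9                # 10B-parameter model (INTELLECT-1 scale)
BYTES_PER_GRAD = 2           # bf16 gradients
GRAD_BYTES = PARAMS * BYTES_PER_GRAD

DATACENTER_BW = 400e9 / 8    # ~400 Gbit/s intra-cluster interconnect, in bytes/s
WAN_BW = 1e9 / 8             # ~1 Gbit/s cross-ocean link, in bytes/s

def sync_seconds(bandwidth_bytes_per_s: float) -> float:
    """Time to ship one full set of gradients at the given bandwidth."""
    return GRAD_BYTES / bandwidth_bytes_per_s

print(f"intra-cluster sync: {sync_seconds(DATACENTER_BW):6.1f} s")  #    0.4 s
print(f"cross-ocean sync:   {sync_seconds(WAN_BW):6.1f} s")         #  160.0 s
```

If a training step takes a second or two on the GPUs, spending minutes shipping gradients after every step means the hardware sits idle almost all of the time. That arithmetic is why naive synchronous training across oceans is a non-starter, and why the communication pattern itself has to change.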
Prime Intellect's core technical contribution is PRIME, an open-source distributed training framework designed to handle all of this gracefully. The key innovations are around fault tolerance and communication efficiency. If a GPU node in Tokyo goes offline during a training run, the system reroutes around it. Training continues in New York, London, and Singapore without interruption. The framework uses techniques like asynchronous gradient compression and intelligent sharding to minimize the communication overhead that would normally make cross-continent training impractical.
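The real implementation lives in their open-source repos, but the general shape of the approach is worth sketching. Below is a hypothetical, heavily simplified illustration of local-update training with compressed, fault-tolerant synchronization, in the spirit of the DiLoCo-style methods Prime Intellect has built on. Every function name and hyperparameter here is invented for illustration; this is not PRIME's actual API:

```python
# Hypothetical sketch: local updates between infrequent syncs, int8-compressed
# deltas, and tolerance of nodes dropping out. NOT PRIME's real API.
import torch
import torch.nn.functional as F

INNER_STEPS = 100   # local steps between cross-cluster syncs (assumed value)
OUTER_LR = 0.7      # outer learning rate (assumed value)

def train_segment(model, optimizer, batches):
    """Purely local training between syncs -- generates no WAN traffic."""
    for _, (x, y) in zip(range(INNER_STEPS), batches):
        loss = F.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def pseudo_gradient(global_params, model):
    """Delta between the last globally agreed weights and the local weights."""
    with torch.no_grad():
        return [g - p for g, p in zip(global_params, model.parameters())]

def compress(deltas):
    """int8-quantize deltas to cut WAN traffic roughly 4x vs fp32."""
    out = []
    for t in deltas:
        scale = t.abs().max().clamp(min=1e-8) / 127.0
        out.append((torch.round(t / scale).to(torch.int8), scale))
    return out

def decompress(compressed):
    return [q.float() * s for q, s in compressed]

def outer_step(global_params, deltas_from_alive_nodes):
    """Average deltas from whichever nodes are still reachable and apply them.
    A node that went offline this round simply contributes nothing."""
    n = len(deltas_from_alive_nodes)
    with torch.no_grad():
        for i, p in enumerate(global_params):
            avg = sum(d[i] for d in deltas_from_alive_nodes) / n
            p -= OUTER_LR * avg
```

The properties that matter: nodes only communicate once every INNER_STEPS steps instead of every step, they ship quantized deltas instead of raw gradients, and a node that disappears mid-round simply drops out of the average instead of stalling everyone else.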
They've already proven this works at scale, multiple times.
INTELLECT-1 was their first major proof point: a 10-billion-parameter model trained simultaneously across GPU clusters on three continents. It maintained 96% compute utilization despite transatlantic latency; for context, many organizations struggle to hit that figure inside a single data center. Doing it across the open internet was a genuine first.
INTELLECT-3 raised the bar further. It's a 106-billion parameter Mixture-of-Experts model trained on the same distributed infrastructure, and it competes with state-of-the-art closed models on reasoning and coding benchmarks. The fact that a decentralized training setup can produce models at this level of quality is a meaningful signal about where this technology is headed.
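If you haven't worked with the architecture: a Mixture-of-Experts layer routes each token to a small subset of "expert" sub-networks, so only a fraction of the model's parameters are active per token. That's how parameter counts can grow past 100 billion without a proportional increase in compute per token. Here's a minimal top-k routing sketch, where every dimension and expert count is an illustrative placeholder rather than INTELLECT-3's actual configuration:

```python
# Minimal top-k Mixture-of-Experts layer. All sizes are illustrative
# placeholders, NOT INTELLECT-3's actual configuration.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e   # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

x = torch.randn(16, 512)
print(MoELayer()(x).shape)  # torch.Size([16, 512])
```

Because only top_k of n_experts experts run for any given token, compute per token stays roughly flat as you add experts, even as the total parameter count keeps climbing.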
For engineers who work on ML infrastructure, these are some of the most interesting systems problems in the industry right now. You're dealing with distributed systems at global scale, fault-tolerant computing across unreliable networks, optimization under real-world latency constraints, and the intersection of systems engineering with cutting-edge ML research. These problems don't neatly separate into "infra" and "research." Solving them requires both.
Why Open Source Isn't a Talking Point Here
Most companies treat open source as a nice-to-have. At Prime Intellect, it's foundational to the entire model.
They open-source their training recipes, model weights, and datasets. Not selectively, not after a delay. By default. INTELLECT-1 and INTELLECT-3 are fully open. The PRIME framework is open-source. Their research is published openly.
This matters because it lets the broader research community build on their work, but it also reflects something deeper. Prime Intellect believes AI development is too important to be locked up by a handful of closed organizations, and open-sourcing everything is how they back that up.
They're also developing a decentralized protocol where people who contribute compute, code, or data to training a model actually own a stake in the resulting AI. It's an attempt to create an economic model where the value generated by AI training flows back to the people who made it possible, rather than being captured entirely by the company that coordinated the run.
Whether or not you're sold on the crypto/protocol angle, the underlying idea is compelling: what if the people who build AI actually owned it?
The Founders and the Team
Vincent Weisser (CEO) comes from the decentralized science world. He co-founded VitaDAO and Molecule before starting Prime Intellect, and he's been one of the most vocal advocates for keeping AI development open and distributed. His background is in building decentralized systems and communities around shared resources. That perspective shapes how Prime Intellect thinks about the problem.
Johannes Hagemann (CTO) is the technical counterweight. He's a distributed ML specialist who architected the training infrastructure from the ground up. His background is in building systems that scale training across unreliable, heterogeneous hardware, which is exactly the hard problem at the core of Prime Intellect's platform.
The team has grown over 200% in the past year, with hires from Google, Meta, and Stanford. Their investor and advisor bench is stacked: Andrej Karpathy (OpenAI, Tesla), Clem Delangue (CEO of Hugging Face), Emad Mostaque (Stability AI), and Balaji Srinivasan. When people at that level are writing checks and lending their names, it's a strong signal about the technical credibility of the work.
On the funding side, they closed a $15M Series A in early 2025, followed by a larger Series B led by Founders Fund in December 2025, bringing total funding past $70 million. For a company of their size, that's an exceptional amount of capital, and it means they have the runway to take real swings at hard problems without the pressure to prematurely commercialize.
What It's Like to Work There
Prime Intellect is headquartered in San Francisco with an office in Berlin, and they operate as a remote-friendly team. They do quarterly off-sites, hackathons, and conference trips. Compensation includes salary, equity, and token incentives tied to their decentralized protocol. They sponsor visas and offer relocation support for international candidates.
But honestly, the real reason to work here is the work itself.
For engineers who want to work on frontier AI without contributing to the consolidation problem, there aren't many options. Prime Intellect is one of them. The technical challenges are real, the commitment to openness is baked into the company's DNA, and the track record (INTELLECT-1, INTELLECT-3, PRIME) shows they can ship, not just talk.
Open Roles
Prime Intellect is currently hiring across 19 open positions, including:
Research Engineers (Distributed Training, Reinforcement Learning)
Members of Technical Staff (Inference, GPU Infrastructure, Full Stack)
Applied Research (RL & Agents, Evals & Data)
AI Research Residents
San Francisco-based, remote-friendly, with visa sponsorship and relocation assistance.
👉 Apply for our next Match Day to get access to Prime Intellect's open roles.
