Your 99 — The Vision
What a continent of minds can build.
This document is honestly idealistic. We say so upfront. But idealism grounded in arithmetic is not fantasy — it is engineering. And the alternative to idealism right now is surrender.
The Honest Disclaimer
What follows is a vision, not a plan. The plan is in the Roadmap. The mechanism is in the Agreement. This document describes where the road leads if the math holds — and if enough people decide that the trajectory we're currently on is not the one they want.
Some of this will sound ambitious. It is. But consider: every paragraph below is less ambitious than what five AI labs are already attempting with a fraction of humanity's input and zero of its consent.
The World We're In
Five companies — OpenAI, Google DeepMind, Anthropic, Meta AI, and xAI — are on a path to build systems that could automate most human intellectual work. This is not speculation. It is the stated goal of their founders, the focus of their research, and the trajectory of their capabilities.
These companies are controlled by a remarkably small number of people. Governance structures concentrate decision-making power: single individuals hold majority voting rights, boards have limited authority, and users have none. The most consequential technology in human history is being directed by a handful of people in a handful of offices.
Geoffrey Hinton — who received the 2024 Nobel Prize in Physics for foundational work on neural networks — left Google in 2023 so he could speak freely about extinction risk from AI. He did not say it was certain. He said it was possible, and that the possibility alone demands attention.
Yoshua Bengio, another founding figure of modern AI, has called for international governance of AI development. Stuart Russell, who wrote the standard textbook on artificial intelligence, has argued that current development approaches create systems whose goals may not align with human welfare.
These are not fringe voices. These are the people who built the foundations. And they are saying: we should be paying attention.
The response from most of humanity has been to watch. To nod. To share articles about AI risk and then open ChatGPT. To acknowledge the concern and do nothing about it.
Doing nothing is a choice. And right now, it is the worst choice available.
The Alternative
What if the people who use AI — the people whose work trains it, whose feedback improves it, whose daily lives are shaped by it — had a voice in how it's built?
Not a comment box. Not a feedback form. Not a petition. Actual ownership. Actual governance. Actual power to shape the direction of the tools they depend on.
This is not abstract. The mechanism exists. The Agreement defines it. Living Stake gives weight to contribution. Governance gives voice to stakeholders. The only thing missing is scale.
And scale is what this document is about.
Part 1 — Community as AI Laboratory
The RLHF revolution that hasn't happened yet
Today, AI systems are improved through Reinforcement Learning from Human Feedback — RLHF. Companies hire contractors to evaluate AI outputs: Is this response helpful? Is it accurate? Is it harmful? This feedback shapes the model's behavior.
The pool of people providing this feedback is small — thousands, not millions. They are often contractors working quickly through evaluation tasks, paid per task, with limited domain expertise. A contractor evaluating a medical response may have no medical training. A contractor rating a legal analysis may have no legal background.
Now imagine the alternative.
A community of millions of people — organized by profession, by expertise, by domain knowledge — providing feedback on AI systems they own. Not as contractors. As owners.
- A nurse uses a Your 99 health AI assistant. When the AI gives a medication interaction warning that's slightly wrong, she corrects it. Her correction is a contribution. She earns stake. The AI improves for every nurse who uses it after her.
- A tax accountant uses a Your 99 financial tool. When the AI misapplies a tax rule, he corrects it with the specific regulation. His expertise becomes training data. He earns stake. The AI becomes more reliable for every accountant.
- A teacher uses a Your 99 education platform. When the AI explains a concept poorly to a 12-year-old, she rewrites the explanation. Her pedagogical knowledge shapes the AI. She earns stake. The AI serves students better.
- A musician uses a Your 99 creative tool. When the AI generates something that sounds derivative, she provides feedback on originality, on feeling, on what makes music alive. Her taste and judgment become the training signal. She earns stake.
- A software engineer reviews AI-generated code. She catches a subtle race condition the AI missed. Her expertise flows into the system. She earns stake. The AI writes better code for everyone.
This is not hypothetical. This is RLHF at a scale no single company can achieve. Not 10,000 contractors. Not 100,000 beta testers. Millions of domain experts, in every profession, in every language, in every country — improving AI because they own the result.
The quality of this feedback would be unprecedented. A nurse correcting a medical AI output provides feedback worth orders of magnitude more than the same evaluation from a generalist contractor. Domain expertise is the scarcest resource in AI alignment, and we are proposing to tap the largest pool of it that has ever existed.
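The stake mechanics described in the bullets above can be sketched in a few lines. This is a toy illustration, not the Agreement's actual mechanism: the `Feedback` and `Ledger` types, the expertise tiers, and every weight below are assumptions invented for this example.

```python
from dataclasses import dataclass, field

# Illustrative weights (assumptions, not real Your 99 parameters):
# a verified domain expert's rating counts for more than a generalist's.
EXPERTISE_WEIGHT = {"verified_domain_expert": 5.0, "experienced_user": 2.0, "general": 1.0}

@dataclass
class Feedback:
    reviewer_id: str
    expertise: str          # one of the EXPERTISE_WEIGHT keys
    output_id: str          # which AI response is being rated
    score: float            # -1.0 (harmful/wrong) .. +1.0 (correct/helpful)
    correction: str = ""    # optional rewritten answer, usable as training data

@dataclass
class Ledger:
    stake: dict[str, float] = field(default_factory=dict)

    def record(self, fb: Feedback) -> float:
        """Credit stake proportional to expertise-weighted effort."""
        weight = EXPERTISE_WEIGHT[fb.expertise]
        earned = weight * (2.0 if fb.correction else 1.0)  # a full correction earns double
        self.stake[fb.reviewer_id] = self.stake.get(fb.reviewer_id, 0.0) + earned
        return earned

ledger = Ledger()
nurse = Feedback("nurse-42", "verified_domain_expert", "resp-7", -0.8,
                 correction="Flag the warfarin interaction; cite the current guideline.")
print(ledger.record(nurse))  # 10.0: expert weight 5.0 x correction bonus 2.0
```

The design point the sketch makes: the same act (rating one response) produces very different training value depending on who performs it, and the ledger can price that difference.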
Why companies can't do this
OpenAI cannot build this. Not because of technology — because of structure. Their users are customers, not owners. There is no mechanism to compensate users for the feedback that improves the model. There is no governance that lets users shape the AI's direction. The feedback flows one way: from user to company. The value flows one way: from company to shareholders.
Your 99 reverses both flows. Feedback flows from user to product. Value flows from product to user. The user's expertise makes the AI better, and the user owns 99% of that improvement.
Part 2 — Community as Infrastructure
The idle machines
Right now, billions of computing devices sit idle most of the time. Your laptop sleeps while you eat lunch. Your desktop is unused while you sleep. Your phone's processor is 95% idle while you read this sentence.
The combined computing power of these idle devices is staggering. Conservatively: hundreds of millions of devices with modern processors and GPUs, idle for a combined total of billions of device-hours per day.
This is not a new idea. SETI@home demonstrated distributed computing in 1999. Folding@home assembled one of the most powerful computing systems on Earth from volunteers' idle machines to simulate protein folding. BOINC has enabled dozens of scientific computing projects through donated cycles.
What's new is the ownership model.
In previous distributed computing projects, contributors donated resources for free. There was no mechanism to compensate them. No governance to let them shape how resources were used. No economic alignment between contribution and reward.
In Your 99, contributing computing resources is a contribution like any other. Your idle GPU cycles help train or run AI models for Your 99 products. Your contribution is tracked. You earn stake. The community's combined computing power becomes infrastructure that no single company needs to fund — and that no single company controls.
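The loop just described resembles BOINC-style volunteer computing with a stake ledger attached. Below is a minimal sketch under assumed rules — redundant computation for verification (the classic guard against faulty or dishonest devices) and one stake unit per verified work unit. None of this is a real Your 99 protocol; every name and number is illustrative.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class WorkUnit:
    unit_id: str
    payload: bytes
    results: dict[str, str] = field(default_factory=dict)  # device_id -> result hash

@dataclass
class ComputeLedger:
    stake: dict[str, float] = field(default_factory=dict)
    REPLICAS = 2  # each unit is computed twice and cross-checked

    def submit(self, unit: WorkUnit, device_id: str, result: bytes) -> bool:
        unit.results[device_id] = hashlib.sha256(result).hexdigest()
        if len(unit.results) < self.REPLICAS:
            return False  # still waiting for the replica
        if len(set(unit.results.values())) == 1:
            # Replicas agree: credit every contributing device.
            for dev in unit.results:
                self.stake[dev] = self.stake.get(dev, 0.0) + 1.0
            return True
        return False  # disagreement: no credit, unit would be reissued

ledger = ComputeLedger()
unit = WorkUnit("wu-001", b"gradient shard 17")
ledger.submit(unit, "laptop-a", b"0.0314")
verified = ledger.submit(unit, "phone-b", b"0.0314")
print(verified, ledger.stake)  # True {'laptop-a': 1.0, 'phone-b': 1.0}
```

Verification-by-replication is exactly what the "security solutions" caveat below refers to: it costs 2x the compute but makes a single dishonest device unable to poison results alone.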
What this means at scale
At 1 million members contributing idle computing: a distributed computing network rivaling mid-tier cloud providers.
At 10 million members: a network approaching the scale of specialized AI training clusters.
At 100 million members: a computing resource that no single company on Earth could match, owned entirely by its contributors, governed by its users.
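Since this document calls itself idealism grounded in arithmetic, here is the back-of-envelope arithmetic behind those three figures. Every constant is an assumption, chosen deliberately low: a modest consumer GPU, a nightly idle window, and half the raw capacity lost to coordination overhead.

```python
TFLOPS_PER_DEVICE = 5     # assumed sustained throughput of a modest consumer GPU
IDLE_HOURS_PER_DAY = 6    # assumed nightly idle window per device
UTILIZATION = 0.5         # assumed fraction usable after scheduling/network overhead

def aggregate_exaflop_hours_per_day(members: int) -> float:
    """Usable exaFLOP-hours per day across all contributing members."""
    tflop_hours = members * TFLOPS_PER_DEVICE * IDLE_HOURS_PER_DAY * UTILIZATION
    return tflop_hours / 1e6  # 1 exaFLOPS = 1e6 TFLOPS

for members in (1_000_000, 10_000_000, 100_000_000):
    print(f"{members:>11,} members: "
          f"{aggregate_exaflop_hours_per_day(members):8.1f} exaFLOP-hours/day")
```

Under these conservative assumptions, 1 million members yield about 15 exaFLOP-hours per day, and the totals scale linearly with membership — which is why the three tiers above differ by exactly the factors they do.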
This is not tomorrow's project. It requires trust, infrastructure, security solutions, and a community that has already proven the basic model works. But it is the direction. And every decision we make now should preserve the path to this possibility.
Part 3 — Collective Intelligence
What volunteers built
Wikipedia: the world's largest encyclopedia, written entirely by volunteers, with no paid editorial staff. Available in 300+ languages. Used by billions. Created by people who contributed because they wanted to, not because they were paid.
Linux: an operating system that runs most of the internet, most of the cloud, every Android device, and virtually all of the world's supercomputers. Created and maintained by a distributed community of contributors. No single company controls it.
Open-source software broadly: the foundation of modern technology. React, Python, PostgreSQL, Kubernetes, TensorFlow — the tools that power the world's software were built collaboratively, often without direct compensation.
These projects proved something extraordinary: that distributed communities of motivated people can build things that rival or exceed what corporations with billions of dollars produce.
What owners could build
Now add what volunteers never had:
Economic alignment. Every contributor earns proportional to their contribution. Not charity. Not goodwill. Ownership.
Governance. Contributors don't just build — they decide. What to build next. How to allocate resources. What direction to take. The people doing the work shape the work.
Scale of motivation. Volunteering attracts a fraction of the potential pool — people with enough time, privilege, and idealism to work for free. Ownership attracts everyone. The nurse who can't volunteer hundreds of hours can contribute domain expertise during her workday, using tools she already uses, earning stake she actually benefits from.
Compounding returns. In open source, contributors give and the community benefits. In Your 99, contributors give, the community benefits, the product improves, revenue grows, distributions increase, more people join, more people contribute. The flywheel has economic fuel that volunteering never had.
What Wikipedia did for knowledge. What Linux did for operating systems. Your 99 can do for every category of software. And then for AI. And then for computing infrastructure. And then for things we haven't imagined, because we've never had a community of this scale with this alignment.
Part 4 — Why Doing Nothing Is the Real Risk
This is not a recruitment pitch. This is a risk assessment.
The current trajectory is clear: AI labs build increasingly powerful systems, controlled by increasingly few people, trained on increasingly intimate human data, with no meaningful governance by the people affected.
The optimistic version of this trajectory: AI makes everything better, the companies are responsible stewards, and humanity benefits. Possible. Not guaranteed.
The pessimistic version: AI concentrates power further, eliminates jobs without distributing the economic gains, and the people who built the models make decisions that affect billions without accountability.
The realistic version is somewhere between. But the range of possible outcomes is wide — from enormous benefit to existential concern. And the people with the most at stake — all of us — have the least influence over which outcome we get.
Your 99 is not the only answer. But it is an answer.
It says: the people who use these tools, who generate the data that trains them, who provide the feedback that improves them, who depend on them daily — those people should have ownership and governance.
Not as an abstract principle. As a legal structure. As an economic mechanism. As a living, functioning community that proves the alternative is not just possible but preferable.
Waiting for governments to regulate AI has produced statements of concern and very little regulation. Waiting for companies to self-govern has produced ethics boards that get dissolved when they become inconvenient. Waiting for "someone" to fix this has produced waiting.
We are not waiting. We are building.
Not because we're certain it works. Because the cost of trying and failing is modest — some time, some effort, some software that wasn't needed. And the cost of not trying, if the trajectory continues unchecked, is everything.
What We're Building Toward
A community where:
- Every digital product you use is owned by its users
- AI is improved by the people who use it, and they own the improvement
- Computing power is shared, not hoarded
- Domain expertise is a recognized contribution, not an exploited resource
- The people who create value — by using, by building, by thinking, by caring — own the value they create
- The direction of the most powerful technology ever built is shaped by the many, not the few
This is not Phase 1. This is not Phase 5. This is the horizon. Every decision we make now — which product to build first, how the community deliberates, what tone we set, what values we embed — either moves toward this or away from it.
We choose toward.
This document is idealistic. We said so at the top. But consider the alternative: accepting that five companies and their boards will determine the future of AI, of work, of how information flows, of what is true and what is generated. That is not pragmatism. That is surrender dressed as realism.
We prefer honest idealism to comfortable surrender.
Document version: 1.0 — March 2026 — your99.co/vision