AI Tools — Category Research Report

Your questions, your creative work, your business strategies — fed into models you don't own, improving products you don't control. This is the landscape, the data, and the opportunity.


The Landscape

The AI tools market barely existed three years ago. It is now projected to exceed $100B annually by 2027.

| Product | Owner | Est. Users | Pricing | Revenue Model |
|---|---|---|---|---|
| ChatGPT | OpenAI | ~200M weekly | Free / $20/mo Plus / $200/mo Pro | Freemium subscription |
| Gemini | Google/Alphabet | ~100M+ | Free / $20/mo Advanced | Bundled + subscription |
| Claude | Anthropic | ~20M+ | Free / $20/mo Pro / $30/mo Max | Freemium subscription |
| Copilot | Microsoft | ~50M+ | Free / $20-30/mo | Bundled + subscription |
| Perplexity | Perplexity AI | ~15M+ | Free / $20/mo Pro | Freemium subscription |
| Midjourney | Midjourney Inc. | ~15M | $10-60/mo | Subscription only |
| Cursor | Anysphere | ~5M+ | Free / $20/mo Pro | Freemium subscription |

Notable: nearly every product charges $20/month for premium access. This price convergence is no coincidence; it reflects the current cost structure of running large language models at scale. But costs are falling rapidly while prices remain static. The margin between cost and price is widening every quarter.

Open-weight alternatives: Meta's Llama, Mistral, DeepSeek, and others provide capable models that anyone can run. The gap between open-weight and proprietary models is narrowing. This matters because it means a Your 99 AI product does not need to build its own foundation model — it can build on open-weight infrastructure.
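
To make "build on open-weight infrastructure" concrete, here is a minimal sketch in Python. It assumes a hypothetical local setup: an open-weight model (e.g. a Llama or Mistral variant) served behind an OpenAI-compatible endpoint by a runtime such as vLLM or llama.cpp. The URL and model name are placeholders, not real services.

```python
import json

# Hypothetical local endpoint; any OpenAI-compatible server would look the same.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-style chat payload. The wire format is identical
    whether the model behind the endpoint is proprietary or open-weight,
    which is what makes swapping in open-weight infrastructure practical."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("llama-3-8b-instruct", "Summarize this report.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is the interface, not the model: a product can switch between Llama, Mistral, or DeepSeek behind this payload without changing application code.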


The Enshittification Timeline

This category is unique — it's enshittifying before it even matures.

  • 2022-2023: The free era. ChatGPT launches free. Captures 100 million users in two months — the fastest consumer adoption in history. The free tier is genuinely good. Users form habits, integrate AI into their workflows, become dependent.

  • 2023: Monetization begins. ChatGPT Plus launches at $20/month. The free tier starts degrading — slower responses, limited model access, usage caps. The pattern is familiar: give generously to create dependency, then charge.

  • 2024: The squeeze. Free tiers across all platforms become functionally limited. GPT-4 access on free is heavily throttled. Gemini Advanced and Claude Pro gate the best models behind $20/month. Meanwhile, the cost of running these models drops by 10-50x due to hardware and efficiency improvements. Prices do not drop.

  • 2024-2025: Enterprise pivot. AI companies shift focus to enterprise sales ($25-60/user/month). Consumer features stagnate. The consumer product becomes a funnel for enterprise sales, not a product built for consumers.

  • 2025: The data realization. Users begin to understand that every conversation is stored and potentially used for training. OpenAI's terms allow using prompts to "improve models" unless opted out — and opting out reduces functionality. Your business strategies, medical questions, creative writing, personal reflections, code — all potential training data. This is arguably the most intimate data any company has ever collected: a real-time stream of what you think, need, and create.

  • 2025-2026: Price-performance gap widens. Inference costs drop dramatically (Llama 3 runs locally on modern hardware). Yet subscription prices remain $20/month across the board. The gap between what AI costs and what users pay grows wider — the difference is pure margin for companies that captured users when costs were genuinely high.


The Data Audit

What AI companies collect from your conversations:

  • Every prompt you type (your questions, instructions, context)
  • Every response generated (often containing your proprietary information reflected back)
  • Conversation metadata (timing, frequency, topics, patterns)
  • File uploads (documents, images, code, spreadsheets)
  • Usage patterns (which features, how often, what workflows)
  • Feedback signals (thumbs up/down, regenerations, edits to outputs)
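
The signals above can be pictured as a single stored record per chat turn. The schema below is purely illustrative; the field names are invented for this sketch and do not reflect any vendor's actual internal format.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative (hypothetical) record of what one AI chat turn can yield.
@dataclass
class ConversationRecord:
    prompt: str                       # every prompt you type
    response: str                     # every response generated
    timestamp: float                  # conversation metadata
    uploaded_files: list = field(default_factory=list)  # documents, images, code
    feature_used: str = "chat"        # usage patterns
    feedback: Optional[str] = None    # thumbs up/down, regenerations, edits

record = ConversationRecord(
    prompt="Review my Q3 pricing strategy: ...",
    response="Your strategy hinges on ...",
    timestamp=1772409600.0,
    feedback="thumbs_down",
)
print(record.feature_used, record.feedback)
```

Even this minimal shape shows why the dataset is sensitive: the prompt and response fields alone carry the user's proprietary content, and the rest is behavioral metadata.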

The training data problem: When you use ChatGPT to refine a business plan, debug proprietary code, or write a legal document — that interaction may be used to train future models. The model that competes with your business may have learned from your business strategy. The code assistant that your competitor uses may have been improved by your code.

The opt-out trap: OpenAI offers a "don't train on my data" toggle. Enabling it disables conversation history. You choose between privacy and functionality. Anthropic and Google have similar trade-offs buried in settings. The message is clear: your data is the price of admission.

What happens at acquisition or pivot: AI companies hold potentially the most valuable dataset ever assembled — the real-time intellectual output of hundreds of millions of people. If OpenAI is acquired, restructured, or changes policy, this data goes with it. Users have no governance rights, no consent mechanism, and often no awareness of what's being collected.


Vulnerability Score

| Criterion | Rating | Explanation |
|---|---|---|
| User resentment | High | Price complaints, privacy awakening, degraded free tiers. Growing awareness that "you are the training data." |
| Switching cost | Low | Conversations don't have network effects. There's no social graph to rebuild. Switching means starting fresh conversations; most users can do this instantly. |
| Technical feasibility | Medium-High | Building a competitive AI interface is feasible. The challenge is model access, but open-weight models (Llama, Mistral, DeepSeek) close this gap. You don't need to train a model. You need to run one well. |
| Monetization clarity | Very High | Users already pay $20/month and consider it expensive. A $10/month alternative with ownership would be immediately compelling. |
| Data sensitivity | Very High | AI conversations contain everything: business strategies, medical questions, personal reflections, creative work, proprietary code. This is the most intimate data stream in technology. |
| Network effects | Very Low | AI tools are almost purely single-user. Your experience doesn't depend on other users being on the platform. |
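
One way to see how these ratings combine is a toy aggregation. The report's ratings are qualitative, so the numeric scale, the equal weighting, and the inversion rule below are all assumptions of this sketch, not the report's methodology.

```python
# Map qualitative ratings onto a 0-4 scale (an assumption of this sketch).
SCALE = {"Very Low": 0, "Low": 1, "Medium": 2, "Medium-High": 2.5,
         "High": 3, "Very High": 4}

# For these two criteria a LOW rating means MORE vulnerable, so flip them.
INVERTED = {"Switching cost", "Network effects"}

def vulnerability_score(ratings: dict) -> float:
    total = 0.0
    for criterion, rating in ratings.items():
        value = SCALE[rating]
        if criterion in INVERTED:
            value = 4 - value
        total += value
    return total / len(ratings)

ratings = {
    "User resentment": "High",
    "Switching cost": "Low",
    "Technical feasibility": "Medium-High",
    "Monetization clarity": "Very High",
    "Data sensitivity": "Very High",
    "Network effects": "Very Low",
}
print(round(vulnerability_score(ratings), 2))  # well above 3 on a 0-4 scale
```

Under these assumptions the category averages roughly 3.4 out of 4, consistent with the "Very High" overall rating.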

Overall vulnerability: Very High. Low switching costs, low network effects, proven willingness to pay, growing resentment, extreme data sensitivity, and viable open-weight model alternatives. This category is among the most vulnerable to a user-owned alternative.


The Your 99 Blueprint

Revenue model: $10/month (half of ChatGPT Plus) or pay-per-use for heavy consumers. Built on open-weight models (Llama, DeepSeek, Mistral) run on efficient infrastructure. As costs continue falling, the price can decrease further — the opposite of the incumbent pattern.

Draft Contribution Map:

| Contribution | Stake per month |
|---|---|
| Active use (10+ sessions/month) | 10 base units |
| Paid subscription ($10/month) | 30 base units |
| AI output feedback (quality ratings, corrections) | 10-50 units (scaled by value) |
| Bug reports (verified) | 5 bonus units |
| Prompt templates shared (used by others) | 10 bonus units |
| Referral (becomes active 30+ day user) | 15 bonus units |
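
The draft Contribution Map above can be sketched as a monthly stake calculation. The unit values come directly from the table; the function shape and the cap on feedback units are assumptions of this sketch.

```python
def monthly_stake(active: bool, subscribed: bool, feedback_units: int = 0,
                  verified_bugs: int = 0, templates_used: int = 0,
                  referrals: int = 0) -> int:
    """Sum one user's stake units for a month per the draft Contribution Map."""
    units = 0
    if active:                 # 10+ sessions/month
        units += 10
    if subscribed:             # $10/month subscription
        units += 30
    units += min(max(feedback_units, 0), 50)  # feedback scaled by value, capped at 50
    units += 5 * verified_bugs
    units += 10 * templates_used
    units += 15 * referrals
    return units

# 10 (active) + 30 (paid) + 20 (feedback) + 15 (one referral) = 75
print(monthly_stake(active=True, subscribed=True, feedback_units=20, referrals=1))
```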

The RLHF insight: The Contribution Map for AI tools includes something unique: feedback on AI outputs as a first-class contribution. When you tell the AI "this answer was wrong, here's why" or "this code has a bug, here's the fix" — that is RLHF data. The same data that companies pay millions to collect from contractors. In Your 99, your expertise earns you ownership.

Economics at scale:

| Scale | Users | Paying % | Monthly Revenue | Compute Costs | Distributable | Builder 1% | Per Paying User |
|---|---|---|---|---|---|---|---|
| Small | 10,000 | 50% | $50,000 | $15,000 | $29,750 | $298 | $5.29 |
| Medium | 100,000 | 50% | $500,000 | $150,000 | $297,500 | $2,975 | $5.29 |
| Large | 500,000 | 50% | $2,500,000 | $750,000 | $1,487,500 | $14,875 | $5.29 |

(Assumes $10/month, ~30% of revenue for compute, ~15% of the remainder for other operating costs, and the standard 1%/10%/89% split.)
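
The table's arithmetic can be checked with a short script. One assumption is inferred here: for the figures to reproduce exactly, "other operating costs" must be roughly 15% of revenue after compute; with that, the distributable pool and per-user figures match the table to within a cent of rounding.

```python
# Reproduce the economics table: 50% of users pay $10/month, compute is 30%
# of revenue, other operating costs ~15% of what remains (inferred so the
# arithmetic matches the table), and the pool splits 1%/10%/89, with the 1%
# going to the builder and the 89% distributed to paying users.
def economics(users: int, paying_pct: float = 0.50, price: float = 10.0,
              compute_pct: float = 0.30, other_pct: float = 0.15) -> dict:
    payers = users * paying_pct
    revenue = payers * price
    compute = revenue * compute_pct
    distributable = (revenue - compute) * (1 - other_pct)
    return {
        "revenue": revenue,
        "compute": compute,
        "distributable": distributable,
        "builder_1pct": distributable * 0.01,
        "per_paying_user": distributable * 0.89 / payers,
    }

small = economics(10_000)
print(f"${small['distributable']:,.0f} distributable, "
      f"${small['per_paying_user']:.2f} per paying user")
```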

The pitch in one line: You pay $10. You get ~$5.29 back. Half-price AI — and you own it. And your feedback actually improves the product for everyone, including you.

Key differentiator beyond ownership: Transparent data policy — your conversations are yours, period. No training on user data without explicit community governance approval. Full conversation export. Model choice (use whichever open-weight model suits your task). And the RLHF loop: your expertise improves the product, and you're rewarded for that improvement.

Minimum viable feature set: Chat interface with model selection, conversation history, file/image upload, code highlighting, conversation sharing (user-controlled), conversation export. Phase 2: custom instructions/personas, API access, specialized tools (writing, coding, analysis). Phase 3: community-trained model improvements, specialized domain models.
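
The "model selection" item in the MVP list could be as simple as a routing table mapping task types to open-weight models. This is a hypothetical sketch; the model names are illustrative examples, not recommendations.

```python
DEFAULT_TASK = "chat"

# Hypothetical task-to-model routing table for the MVP's model selection.
ROUTES = {
    "chat": "llama-3-8b-instruct",
    "code": "deepseek-coder-v2",
    "writing": "mistral-7b-instruct",
}

def pick_model(task: str) -> str:
    """Fall back to the general chat model for unknown task types."""
    return ROUTES.get(task, ROUTES[DEFAULT_TASK])

print(pick_model("code"), pick_model("analysis"))
```

Keeping routing in user-editable configuration rather than code would also support the transparency goal: users can see, and change, exactly which model handles which task.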


Open Questions

  • Can open-weight models genuinely compete with GPT-4/Claude on quality for everyday use? The gap is narrowing, but is it narrow enough today? What about in 6 months?
  • How do compute costs scale with users? At 500,000 users doing 10+ sessions/month, what's the real infrastructure cost? Can community-contributed computing resources (see Vision document) offset this?
  • Is the real product a general AI assistant, or specialized AI tools (AI for writing, AI for code, AI for health, AI for education)? Specialization might create stronger Contribution Maps and clearer ownership value.
  • Could the Your 99 community become a source of high-quality RLHF data at scale? Millions of domain experts providing feedback in their fields of expertise — this would be unprecedented and enormously valuable.
  • Should this be a standalone product or an AI layer integrated into other Your 99 products (notes, social, productivity)? The "AI for everything" approach vs. the focused tool approach.
  • What about the environmental cost of running AI models? Should the community governance include compute budget decisions?

Report version: 0.1 (initial draft; community discussion needed)
March 2026
See Research Template for methodology