
OpenAI Scored 2.25/10. This Is What Betrayal Looks Like.

The company that promised to "benefit all of humanity" now extracts $7.5 billion from communities while selling surveillance to the Pentagon.


The Makers of ChatGPT Knew What They Were Doing The Whole Time. Final Score: 2.25/10

Extraction masquerading as empowerment


OpenAI represents a fundamental betrayal of its founding mission. What began as a nonprofit dedicated to ensuring AI "benefits all of humanity" has become the most financially successful extractive business model in AI's short history - scraping trillions of tokens from creators without permission, building a company now valued at $500 billion, and paying exactly zero dollars to the millions of writers, artists, and researchers whose work made it possible. This isn't about technical capability. It's about systematic exploitation dressed up as progress.

The company's $200 million Pentagon contract, active conversation scanning for law enforcement, and systematic suppression of safety concerns make it fundamentally incompatible with movement security needs.


Why This Score Matters

After evaluating Grok's surveillance infrastructure (1.6/10) and Anthropic's safety-first approach (6.4/10), OpenAI's 2.25/10 reveals something more insidious: what happens when genuine safety commitments collapse under profit pressure. The November 2023 board crisis wasn't about power struggles - it was safety-focused board members trying to slow down a CEO prioritizing growth over safety. When Sam Altman was briefly fired for undermining the safety mission, 770 of 800 employees threatened to quit unless the board resigned.


The safety team lost. Growth won. And now we live with the consequences.


Category Breakdown: Systematic Failure


1. Data Rights & Privacy Protection: 3/10 (20% weight)

Mass copyright infringement as business model


OpenAI faces over a dozen consolidated copyright lawsuits for scraping "millions" of articles, 500,000 books from pirated "shadow libraries," and vast swaths of the public internet without permission or compensation. The New York Times sued for using their content without authorization - with evidence showing verbatim article reproduction. Multiple publishers, authors' groups, and individual creators have filed similar claims.

Worse, OpenAI explicitly admits that copyright infringement is necessary for its business to exist. When challenged in court about training on copyrighted material, it positioned extraction as inevitable rather than illegal.


Privacy failures are structural, not incidental:

  • Even with "chat history off," OpenAI retains conversations for 30 days for "abuse monitoring"

  • 7.2 million federal employees' data exposed to company scrutiny for litigation purposes

  • Active conversation scanning and proactive reporting to law enforcement without warrants

  • Court orders can still mandate indefinite retention even for conversations users believed were private


Active surveillance tool development: In February 2025, OpenAI disclosed that Chinese-origin accounts had been using its models to build tools for monitoring discussions of "human rights in China" across X, Facebook, Telegram, Instagram, YouTube, and Reddit. The company bans such accounts after detection, but its enforcement cooperation remains undisclosed.


Bottom line: Can Palantir get your organizing data? Yes. They don't even have to ask - OpenAI sends it as soon as your speech is deemed "suspicious."


2. Labor Impact & Worker Justice: 2/10 (15% weight)


The Kenyan data worker scandal exposed OpenAI's labor model with brutal clarity. TIME Magazine's investigation found workers contracted through Sama earning just $1.32-$2.00 per hour - with 88% profit margins extracted from poverty-wage labor. These workers spent nine-hour shifts reviewing graphic content describing child sexual abuse, bestiality, murder, suicide, and torture. All for pennies, with PTSD, depression, anxiety, insomnia, and sexual dysfunction as occupational hazards.


The company provided exactly zero mental health support until the scandal broke.

Naftali Wambalo, a mathematics degree holder and father of two, told CBS: "If the big tech companies are going to keep doing this business, they have to do it the right way. It's not because you realize Kenya is a Third World country you say, 'This job I would normally pay $30 in US, but because Jane lives in Kenya, it's enough to pay you $2.'"


The pay gap is obscene. OpenAI software engineers in San Francisco earn a median $910,000 annually. Kenyan essential workers doing the traumatic labor that makes models safe earned $3,168 yearly. That is a 287-fold difference - white-collar workers at headquarters receive 287 times more than the predominantly Black workers in the Global South doing work so psychologically damaging that OpenAI's own contractors balked.
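For anyone who wants to check that ratio, here is a minimal sketch using only the figures quoted above; the hours-per-week and weeks-per-year values in the hourly cross-check are illustrative assumptions, not figures from TIME's reporting.

```python
# Quick check of the pay-gap arithmetic using only the figures quoted above.
sf_engineer_median = 910_000   # median annual salary, San Francisco (figure cited above)
kenya_annual_wage = 3_168      # annual earnings for Kenyan data workers (figure cited above)

ratio = sf_engineer_median / kenya_annual_wage
print(f"Pay gap: {ratio:.0f}x")  # -> roughly 287x, matching the 287-fold claim

# Cross-check against the quoted hourly range of $1.32-$2.00.
# The hours/weeks values below are assumptions for illustration only.
hours_per_week = 45
weeks_per_year = 48
annual_low = 1.32 * hours_per_week * weeks_per_year
annual_high = 2.00 * hours_per_week * weeks_per_year
print(f"Implied annual range: ${annual_low:,.0f}-${annual_high:,.0f}")
# -> roughly $2,851-$4,320, consistent with the $3,168 annual figure
```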


Zero compensation to creators whose work trained models:

Meanwhile, OpenAI has paid exactly zero dollars to the millions of writers, artists, journalists, and researchers whose work trained GPT models. The company scraped roughly 197,000 books from the Books3 dataset via BitTorrent, 294,000+ from "shadow libraries," and 3 million books from Z-Library without asking or compensating authors. OpenAI now actively scans all conversations - without user consent - to identify people sharing copyrighted content, while sitting on a $500 billion valuation built by systematically extracting 88% profit margins from traumatic content moderation and openly building technology designed to "outperform humans at most economically valuable work" (their words).


Job displacement philosophy reveals the agenda:

OpenAI's mission statement explicitly defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." This is replacement by definition, not augmentation.


Bottom line: Who got exploited to build this? Kenyan workers reviewing traumatic content for poverty wages, millions of creators whose work was stolen, probably you, and soon - anyone whose job can be automated away.


3. Democratic Governance & Accountability: 1/10 (15% weight)


The November 2023 board crisis exposed OpenAI's governance as pure theater.

The safety-focused board fired Sam Altman for repeatedly lying and undermining the safety mission. Helen Toner documented "years of withholding information, misrepresenting things, in some cases outright lying." Within five days, 770 of 800 employees threatened to quit unless the board resigned. Microsoft offered to hire the entire team; Altman returned with a reconstituted board stripped of independent safety oversight. The two women directors who raised safety concerns - Helen Toner and Tasha McCauley - were immediately forced out.

In May 2024, Ilya Sutskever (co-founder, chief scientist) and Jan Leike (Superalignment co-lead) resigned in protest. Leike's statement was damning: "Safety culture and processes have taken a backseat to shiny products." The Superalignment team was immediately dissolved. Nearly 50% of the 38-member team either resigned or were laid off within weeks.


"Democratic inputs" are participation-washing:

OpenAI's $1 million "Democratic Inputs to AI" grant program explored how communities might contribute to model governance - then explicitly excluded questions about development priorities, safety standards, deployment decisions, or economic models from community input. No ongoing community representation on board, no binding votes, no veto power over dangerous developments. Real democratic governance would give affected communities actual power. OpenAI gave them a survey.


Bottom line: Who controls this thing and can we fire them? The board attempted to enforce safety priorities and was overridden in five days by employee loyalty to growth over safety. Now Altman and megacorporate backers run the show.


4. Accessibility & Economic Justice: 3/10 (15% weight)


OpenAI maintains flat global pricing at $20/month for ChatGPT Plus despite massive income disparities. $20 represents vastly different economic burdens across the world.


Geographic blocking affects 1.5+ billion people:

OpenAI blocks China (1.4 billion people), Russia, Iran, North Korea, Hong Kong, Syria, Afghanistan, Yemen, and additional conflict zones. While these blocks may represent legitimate sanctions compliance, they create AI access divides along geopolitical rather than need-based lines. Researchers, activists, and humanitarian organizations in these regions must work through VPNs and risk account suspension - OpenAI actively enforces the exclusion.


Grassroots organizations face prohibitive barriers:

Small nonprofits must pay a minimum of $30/month per seat for the Team plan ($360+ annually) to access features like longer context windows and advanced models. The free tier caps messages per time window and file uploads at 3 files/day, making serious work impossible. The nonprofit discount (administered through Goodstack) is 20% off Teams - still roughly $20-24 per user monthly - and it excludes academic institutions.


Educational pricing is token and restricted:

ChatGPT Edu targets universities only (pricing undisclosed, contact sales). The two-month free Plus trial for students applies only to U.S. and Canada (March-May 2025 promotion), with global launch uncertain. No ongoing student discounts globally, no K-12 education pricing, no individual teacher access, no community college accommodations.

Language support covers 50+ languages, but Global South languages receive less training data and worse results.


What works: Industry-standard encryption and security, some accessibility improvements for blind users via the desktop app (vs. web), and 80+ language support, though quality varies dramatically.


What fails: You cannot delete yourself from trained models, and systematic scraping of copyrighted content without consent cannot be undone. The "chat without an account" feature allows conversation use for training (with no user controls). Even with account settings limiting data use, anonymous conversations become training data permanently.


Bottom line: Can grassroots groups actually use this? Not at $20-30+ per seat with no meaningful nonprofit discounts and accessibility problems left unresolved for years.


5. Movement Security: 2/10 (10% weight)

This is the category that matters most for organizers. OpenAI's 2/10 score means extreme caution required for sensitive work.


Active conversation scanning for law enforcement:

OpenAI announced in 2025 that it actively scans user conversations for threatening content using "specialized pipelines," routing flagged messages to human reviewers for potential police reporting. The company says content is escalated when reviewers judge it involves an imminent threat of serious physical harm to others (or, in some cases, to the user themselves). There is no transparency about what triggers flags, how "threat" is defined, or whether the system distinguishes legitimate activism from violence. Activists discussing protest tactics, direct action, or confrontational organizing could trigger enforcement - absent any context about legality.


Law enforcement compliance rates are high:

  • 62% of non-content requests resulted in disclosure, covering 49 accounts

  • 8 content requests yielded 11 accounts disclosed (111 total cases in 2024)


Transparency reports show 79 non-content requests granted where OpenAI found "no legal barrier" to disclosure. Emergency disclosure provisions allow sharing data with no legal process at all if OpenAI believes someone faces death or serious physical injury. No details are given about how that determination is made.


Encryption is wholly inadequate for sensitive use:

OpenAI provides zero end-to-end encryption - the company can read all conversations and must be able to for AI processing. Sam Altman stated in August 2023 that ChatGPT conversation history is "technical debt we haven't solved" - backend fixes ongoing.

Anonymous use without accounts became possible in April 2024, but these conversations are still scanned. Court orders can still mandate indefinite retention.


Documented surveillance tool development is concerning:

In February 2025, OpenAI reported that Chinese-origin accounts had used its models to build tools monitoring discussions of "human rights in China" across X, Facebook, Telegram, Instagram, YouTube, and Reddit. The company bans such accounts after detection, but its enforcement procedures remain undisclosed. Activists face dual risks: state-aligned actors using OpenAI tools for surveillance, and OpenAI's own reporting to states.


Geographic blocking of major authoritarian regimes is the sole positive:

OpenAI blocks China, Russia, Iran, and Saudi Arabia, preventing direct access in those contexts. However, the blocks don't stop determined actors from using work-arounds, Azure OpenAI operates under different restrictions (including China access through Microsoft's 21Vianet partner), and none of this limits U.S. agencies - U.S. law enforcement retains extensive access through the data request process described above.


What works: Blocks access in China, Russia, Iran, Saudi Arabia (positive but incomplete), publishes semi-annual transparency reports showing request volumes and compliance, bans accounts after detection, offers anonymous use without account, and some enterprise customers have zero data retention.


What fails: $200 million Pentagon contract with "warfighting applications," active scanning of all conversations plus abuse reporting, 62-75% law enforcement compliance with no evidence of legal pushback, zero end-to-end encryption with no implementation timeline, IP addresses logged even in "anonymous" mode.


Bottom line: Will this get activists arrested? The combination of active conversation scanning, proactive law enforcement reporting, Pentagon contract integration, zero end-to-end encryption, and high compliance rates with government data requests basically guarantees it.


6. Environmental Justice: 3/10 (10% weight)


GPT-4 training consumed an estimated 51-62 million kilowatt-hours and produced 12,456-14,994 metric tons of CO2 - equivalent to the annual emissions of 938 average Americans, for a single model. OpenAI's planned expansion with Microsoft Azure data centers will require enough electricity to power Switzerland and Portugal combined. Each ChatGPT query uses five times more energy than a standard Google search, and GPT-4 consumes an estimated 0.14 kilowatt-hours and 19 millilitres of water per query for data center cooling - 28-50 queries adds up to a small plastic water bottle.
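A back-of-the-envelope check of those equivalences, using only the numbers cited above plus an assumed average U.S. per-capita footprint of roughly 15-16 metric tons of CO2 per year (a commonly cited figure, not one from this review):

```python
# Sanity check: does 12,456-14,994 t of CO2 really equal ~938 Americans' annual emissions?
training_emissions_t = (12_456, 14_994)   # GPT-4 training estimate cited above (metric tons CO2)
americans = 938

low, high = (t / americans for t in training_emissions_t)
print(f"Implied per-capita footprint: {low:.1f}-{high:.1f} t CO2/year")
# -> roughly 13.3-16.0 t, in line with the assumed ~15-16 t average U.S. footprint

# Water arithmetic from the per-query figure cited above (19 ml per query).
ml_per_query = 19
print(f"Water for 28-50 queries: {28 * ml_per_query} ml - {50 * ml_per_query} ml")
# -> roughly 530-950 ml, on the order of a small plastic water bottle
```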


Environmental justice is completely absent:

Despite extensive research funding, OpenAI has zero programs, partnerships, or documented consideration of frontline communities, and it proceeds with massive expansion in water-stressed regions. For example, Iowa's West Des Moines data center will need 6 million gallons of water to cool Microsoft Azure operations. Each ChatGPT query perpetuates water consumption in drought-affected areas. Academic consensus documents that AI data centers disproportionately harm low-income communities and communities of color, yet OpenAI demonstrates no community consultation for data center siting, no indigenous data sovereignty initiatives, no environmental justice partnerships, and no frontline community benefits.


Transparency ranks among the worst in Big Tech:

OpenAI publishes no annual sustainability report (unlike Google's 86-page detailed reports or Microsoft's comprehensive disclosures). The company revealed per-query water use and per-model training emissions only in isolated research papers during 2023, after sustained pressure. OpenAI refuses to disclose total annual emissions, current energy consumption by specific operations, water consumption totals by facility, detailed methodology for emissions calculations, or progress reports on reduction targets. Its climate commitments are vague, announced after the fact, and carry no accountability.


The company set a Science Based Targets initiative-aligned goal of 42.7% absolute emissions reduction by 2030 but has no clear plan. Green Power Purchase Agreements and Renewable Energy Certificates provide limited offsets. Microsoft aims for "100% renewable energy by 2025" and "carbon negative by 2030," but no primary source verification exists; these appear to be aspirational marketing claims. OpenAI's reliance on Microsoft Azure makes direct responsibility diffuse - yet the "infrastructure needed to advance new technologies, including generative AI" that Microsoft cites for its own rising emissions is substantially attributable to OpenAI.

The $50 million "People-First AI Fund" announced in July 2025 includes no climate justice component and amounts to 0.01% of the company's $500 billion valuation. OpenAI's 17-gigawatt expansion proceeds without substantive climate justice documentation or accountability to vulnerable communities - and renewable energy buildout cannot match the pace of AI infrastructure growth.


What works: Recent per-query disclosure shows some transparency improvement, an SBTi-aligned reduction target exists (42.7% by 2030), a partnership with N'wungle offers user-optional carbon offsetting, reliance on Azure inherits some of Microsoft's data center sustainability commitments, and per-query emissions are relatively low compared to human alternatives for equivalent tasks.


What fails: GPT-4 training emissions equivalent to 938 Americans' annual footprint, a massive 17-gigawatt expansion without sustainability planning, data center siting in regions with vulnerable communities, and training emissions that effectively "go to waste" as each model generation is superseded - the environmental costs compound while accountability never arrives.


Bottom line: Is this company burning the planet to make funny videos out of your selfies? Yeah. And they won't even tell us how much they're burning.


7. Bias & Algorithmic Justice: 2/10 (10% weight)


OpenAI's models exhibit severe, well-documented bias across every dimension tested.

A Lancet Digital Health study found "consistently higher bias" against Arab, Black, Central Asian, and Hispanic applicants, who were rated lower than white candidates - alongside higher bias against women than men in every profession tested.


DALL-E bias reveals occupational stereotypes:

An ACM study analyzing 150 professions found DALL-E assumed "salesperson" and "singer" are 100% women (actual statistics: 49% and 26% respectively) while assuming "laborer" and "judge" are predominantly men (actual: 15% and 56%). "Almost all occupations" were represented by white people, with only 3% of generated images depicting non-white people. NBC News found "Latino" and "African" terms generated 2x the pornographic imagery of "American" or "British" terms. Native American prompts overwhelmingly generated headdresses "in almost all representations" and traditional rather than modern dress. Generated images universally depicted white people in positions of authority across professions.


Training data perpetuates Western-centric bias:

GPT-3 training data was roughly 60% internet-crawled material with a heavy focus on English-language content, 16% from books, and 3% from Wikipedia. OpenAI acknowledges the model is "skewed towards Western views and performs best in English," with "Standard American English likely the best-represented variety." Global South perspectives are systematically marginalized, and under-represented language communities are excluded entirely. This reinforces existing power dynamics and amplifies existing biases and privileges.


Bias mitigation efforts are inadequate and sometimes counterproductive:

In 2024, it was demonstrated that Reinforcement Learning from Human Feedback (RLHF), OpenAI's main mitigation strategy, exhibited "inherent sensitivity risks" and "preference instability" for minority users. Research showed that GPT-4 responses using RLHF increased stereotyping by 14% compared to GPT-3.5. Additionally, "certain measures to prevent harmful content have only been tested in English."


Development team diversity is opaque:

OpenAI employs ~2,659 people but publishes zero workforce demographic data. Glassdoor reviews describe teams that are "overwhelmingly white and male," and the reconstituted board (Bret Taylor, Larry Summers, Adam D'Angelo) contained no women - the only women directors departed during the upheaval. Research demonstrates that corporate boards lacking gender diversity have a "higher risk of engaging in unethical conduct." OpenAI's website offers generic "commitment to diversity, equity and inclusion" statements but provides no metrics or accountability.


Real-world harm is extensively documented:

ChatGPT "consistently ranked resumes with disability-related honors and credentials lower" - disability-enhanced CVs ranked first only 25% of the time. Veteran status and "neurodiversity accommodations are documented. No dedicated accessibility program exists. Medical diagnostic recommendations pose "concerns for using LLMs to generate simulated clinical data" due to stereotyped outputs.


What works: a red-teaming network of external experts is established, content filters are in place (though they can be bypassed), weekly review meetings occur with human reviewers (composition not revealed), and there are some improvements in GPT-4 compared to GPT-3 on specific bias assessments.


What fails: "Covert racism more severe than most unfavorable human stereotypes ever recorded," 89% of medical conditions showed racial stereotype exaggeration, 2=17% lower empathy for minority users, 19=25% worse treatment quality metrics for Black patients versus white RLHF worsens bias by 14% in some cases, zero workforce diversity data published, all-male board during critical period with Larry Summers (notable controversies regarding women in STEM), minimal or zero real-world minority communities, bias mitigation is reactive (post-use) not preventive, no public accountability metrics or timelines, and $60-million weekly users means massive real-world impact across hiring, healthcare, and education systems.


Bottom line: Does this tool support systemic bias? Yes. The corporate culture and ChatGPT itself squarely reinforce old ways of thinking and doing business.


8. Community Benefit vs. Extraction: 1/10 (5% weight)


OpenAI has extracted an estimated $7.5+ billion in intellectual property value (a conservative estimate based on book licensing costs, news content value, and academic research compensation). The company scraped 197,000 pirated books from the Books3 dataset via BitTorrent, 294,000+ from "shadow libraries," 3 million books from Z-Library, and trillions of tokens from the public internet - equivalent to 3.7 billion pages of content taken without authorization. The result is a $500 billion valuation (up from $30 billion in 2023) whose returns flow exclusively to investors, while the content creators who made it possible received nothing.


The extraction/benefit gap is conservatively 10,000:1.


In July 2025, OpenAI announced a "People-First AI Fund" of $50 million with much fanfare. This amount represents 0.01% of the company's $500 billion valuation, roughly 0.4% of its $12-13 billion annual revenue, and 0.5% of the $10 billion the company spends annually. It's essentially a minor expense aimed at generating positive publicity. And what does the fund actually pay for? Primarily AI literacy workshops - which train people to use OpenAI's products.
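Those percentages, and the 10,000:1 extraction-to-benefit gap noted above, follow directly from the figures already quoted; a minimal check:

```python
# Check the fund-size percentages and the 10,000:1 gap using the figures quoted in this review.
fund = 50e6                 # "People-First AI Fund", USD
valuation = 500e9           # company valuation, USD
revenue = (12e9, 13e9)      # annual revenue range, USD
annual_spend = 10e9         # annual spending, USD

print(f"Fund vs valuation: {fund / valuation:.2%}")                              # -> 0.01%
print(f"Fund vs revenue:   {fund / revenue[1]:.2%}-{fund / revenue[0]:.2%}")     # -> ~0.38%-0.42%
print(f"Fund vs spending:  {fund / annual_spend:.2%}")                           # -> 0.50%
print(f"Valuation-to-fund ratio: {valuation / fund:,.0f}:1")                     # -> 10,000:1
```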




What was OpenAI's SXSW 2024 answer for how creators would be compensated? When asked directly whether artists should be compensated, OpenAI VP Peter Deng refused to answer, calling it "a great question" before dodging. Sam Altman publicly stated in December 2024: "I think we do need a new [standard] for how creators are going to get rewarded" - then implemented exactly none of it. He has floated a "micropayments" model similar to NIL deals for college athletes, but implemented exactly zero such micropayments.


Active litigation documents the extraction:

The New York Times lawsuit seeks "billions in damages" plus destruction of unauthorized training data. Other cases: The Guardian and The Daily Mail filed an 84-page lawsuit seeking $237,000 in statutory damages per work. The Authors Guild, Sarah Silverman, Ta-Nehisi Coates, and dozens of individual writers sued over book scraping. Visual artists and rights-holders including Getty Images, Andersen (Trolls illustrator), Holmberg, and others sued over image replication.

Courts ordered OpenAI to provide training datasets for inspection - core copyright claims are proceeding.


Academic researchers receive token compensation:

OpenAI offers "up to $1,000 in API credits" per researcher who applies through SurveyMonkey for quarterly reviews - not automatic or free access.

The company extracted trillions of tokens of free research and academic content for training but provides essentially nothing back. Georgetown CSET researchers note: "AI systems are being trained on years of human expertise. Workers are being displaced by tools built on their own labor, without credit or compensation. It's not just automation; it's extraction."


What works: Some publishers received licensing deals (though not revenue sharing), OpenAI Academy provides free AI literacy workshops (which mostly teach people to use OpenAI's products), and a $50 million community fund exists, despite being just 0.01% of valuation.


What fails: $0 compensation to creators whose work trained $500 billion valuation, extraction-to-benefit ratio conservatively 10,000:1 or higher, $50 million fund is 0.01% of valuation and 0.4% of revenue (rounding error, not meaningful redistribution).

Altman promises creator compensation while implementing exactly zero meaningful redistribution - systematic exploitation masked by minimal "benefit" initiatives that represent 0.01% of extracted value.


Bottom Line: Does this build up the communities it interacts with? Nope. The closest they get is teaching you how to lock yourself into their product.


Comparative Analysis: OpenAI vs. Other Platforms

Anthropic (6.4/10): Claude's company scores 4.15 points higher than OpenAI, primarily due to better governance (no comparable board crisis), a Public Benefit Corporation structure from its founding, and employee equity tied to long-term Constitutional AI values rather than capped-profit extraction. Anthropic still has severe issues with transparency, environmental impact, and accessibility - but it avoided OpenAI's spectacular governance collapse and Pentagon integration.


Grok (1.6/10): Elon Musk's xAI scores 0.65 points lower than OpenAI, making OpenAI slightly less terrible than the worst evaluated option. Grok's worse scores result from Musk's direct authoritarian control, direct government surveillance partnerships, and zero safety infrastructure; its partial open-sourcing provides only minimal transparency benefits.


Hugging Face (5.4/10): The open-source platform scores 3.15 points higher than OpenAI through genuine community governance, model transparency, no military partnerships, and infrastructure that supports rather than extracts from creators. Hugging Face hosts rather than monopolizes, enables rather than extracts, and maintains the community accountability OpenAI abandoned.


What OpenAI's Score Reveals


At 2.25/10, OpenAI represents the worst combination of extractive business model, governance collapse, military integration, and systematic exploitation among major AI providers not explicitly designed for authoritarian surveillance.


The company started with genuine promises and systematically betrayed every one:

  • "Benefit all of humanity"? $500 billion surveillance capitalism exemplar.

Safety-first nonprofit mission? Board coup, safety team exodus, growth über alles

  • Compensating creators? $7.5 billion extracted, $0 paid back, lawsuits in progress

Democratic governance? CEO loyalty override board oversight in five days.

  • Movement-protective security? Pentagon contractor scanning conversations for police.


This technology was built in an environment where people who raised safety concerns were fired or silenced through illegal NDAs threatening millions in vested equity.


The Verdict for Organizers and Activists

Do not use OpenAI tools for sensitive work.


The combination of active conversation scanning, proactive law enforcement reporting, 62-75% government request compliance, zero end-to-end encryption, and systematic transparency failures creates a hostile environment for activism. Every conversation can be read. Flagged conversations get reported to police without warrants. The company's deep integration with U.S. military and national security infrastructure makes these tools fundamentally incompatible with movement security needs.


Understand the extraction model if you choose to use the free or low-cost tiers:

Your creative ideas and intellectual labor train future models you'll never benefit from financially. The company has demonstrated zero commitment to compensating communities whose data creates value. The "free" tier generates training data - OpenAI then sells access back to communities at $20/month while scanning conversations for enforcement.


Consider the labor conditions your use supports:

Every query contributes to demand that previously required Kenyan workers to review traumatic content for $1.32/hour. OpenAI showed no accountability for documented psychological harm and maintains 287-fold pay gaps between headquarters engineers and Global South essential workers. Using these tools materially supports corporate labor practices that extract maximum value from vulnerable workers.


Recognize the governance implications:

OpenAI's collapse from safety-first nonprofit to Pentagon contractor happened because employee loyalty to the CEO overrode board oversight in five days. The 50% safety team exodus revealed systematic suppression of safety concerns, concealed through illegal NDAs threatening millions in vested equity. ChatGPT was built in an environment where safety priorities were explicitly abandoned - which makes the "benefit all of humanity" rhetoric hollow for anyone who prioritizes security, privacy, community benefit, and ethical labor practices.


Using these tools materially supports extractive capitalism and military surveillance.


Final Recommendation: 2.25/10 - Avoid for organizing, use extreme caution if necessary, prioritize genuinely community-accountable alternatives.



 
 
 
