

OpenAI Ethics Scorecard
Corporate Colonization Dressed as Progress: 2.25/10
​
Final Score: 2.25/10 — Among the worst AI tools evaluated for progressive organizing and community empowerment. OpenAI represents a fundamental betrayal of its founding mission, extracting massive value from creators and communities while channeling profits to investors and insiders. The company's deep Pentagon integration, systematic labor exploitation, and governance collapse make it hostile territory for activists.
​
The bottom line: extraction masquerading as empowerment
OpenAI operates the most financially successful extractive model in AI history: scrape trillions of tokens from creators without permission, train models that underpin a $500 billion valuation, pay exactly zero dollars to those whose work made it possible, then sell access back to communities at $20/month while actively scanning conversations and reporting users to police. The company's $200 million Pentagon contract, 50% safety team exodus, and 287-fold pay gap between San Francisco engineers and Kenyan data workers reveal an organization that has fully abandoned its "benefit all of humanity" rhetoric for surveillance capitalism and military partnerships.
OpenAI is pursuing artificial general intelligence explicitly defined as technology that "outperforms humans at most economically valuable work" while extracting 88% profit margins from Global South workers reviewing traumatic content for $1.50/hour. Activists considering these tools should understand: you are the product, your conversations are monitored, and the $13 billion Microsoft partnership ensures deep integration with U.S. national security infrastructure.​
​
1. Data Rights & Privacy Protection: 3/10 (20% weight)
OpenAI faces over a dozen consolidated copyright lawsuits for scraping "millions" of articles, 500,000 books from pirated "shadow libraries," and vast swaths of the public internet without permission or compensation. The New York Times lawsuit demonstrates GPT models memorize and reproduce articles verbatim—the court allowed the case to proceed after OpenAI's motion to dismiss failed. The company admits it "would be impossible to create useful AI models absent copyrighted material," positioning massive unauthorized extraction as inevitable rather than illegal.
​
Privacy failures are structural, not incidental. Even with "chat history off," OpenAI retains conversations for 30 days. A federal court ordered the company to retain even "temporary" chats indefinitely for litigation purposes—contradicting user expectations entirely. OpenAI now actively scans all conversations for "threatening content" and proactively reports users to law enforcement without warrants. The company suffered an undisclosed 2023 hack (not reported for over a year) and a March 2023 data breach exposing conversation titles. There is no end-to-end encryption—OpenAI can read everything.
​
The company's transparency reports show 62% compliance with non-content law enforcement requests and 75% compliance with content requests. Government data sharing happens through "trusted service providers" and undefined "industry peers." The recent shift to allowing military applications after explicitly banning them until 2024 signals deeper surveillance partnerships. OpenAI's privacy controls exist but are undermined by dark patterns "seeking to discourage" opt-outs, according to TechCrunch.
​
What works: Industry-standard encryption at rest (AES-256) and in transit (TLS 1.2+), opt-out mechanisms do exist, API customers can choose zero data retention, and the company publishes semi-annual transparency reports.
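For API users, the closest thing to a least-retentive configuration is worth illustrating. A minimal sketch, assuming the current openai Python SDK and its `store` flag on chat completions; note that full Zero Data Retention is an organization-level arrangement with OpenAI, not something any per-request parameter can guarantee:

```python
# Minimal sketch: ask the API not to store this completion for later retrieval.
# Assumes the openai Python SDK's `store` parameter on chat completions.
# Organization-wide Zero Data Retention must be arranged with OpenAI directly;
# this flag alone does not guarantee it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this public press release."}],
    store=False,  # request that the completion not be retained for later retrieval
)
print(response.choices[0].message.content)
```

Even in this configuration, prompts still transit OpenAI's servers in readable form, which is why this scorecard treats retention controls as mitigation rather than protection.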
​
What fails: Cannot delete yourself from trained models, systematic scraping of copyrighted content without consent represents data extraction at civilization scale, no compensation to creators, active conversation scanning for law enforcement, court-ordered permanent retention destroys "temporary chat" privacy, and Microsoft partnership creates secondary surveillance pathways.
​
2. Labor Impact & Worker Justice: 2/10 (15% weight)
The Kenyan data worker scandal exposed OpenAI's labor model with brutal clarity. TIME Magazine's investigation revealed OpenAI paid outsourcing firm Sama $12.50 per hour per worker, but Kenyan workers received just $1.32-$2.00 per hour—an 88% profit margin extracted from poverty-wage labor. These workers spent nine-hour shifts reviewing graphic content describing child sexual abuse, bestiality, murder, suicide, and torture. All four workers interviewed reported being "mentally scarred," developing PTSD, depression, anxiety, insomnia, and sexual dysfunction. The company provided inadequate mental health support—group sessions only, not the one-on-one counseling OpenAI claimed—and canceled the contract three days after the scandal broke, laying off 200 workers with zero compensation for psychological damage.
​
Naftali Wambalo, a mathematics degree holder and father of two, told CBS: "If the big tech companies are going to keep doing this business, they have to do it the right way. It's not because you realize Kenya is a Third World country you say, 'This job I would normally pay $30 in US, but because you are Kenya, $2 is enough for you.' That idea has to end." Kenyan civil rights activist Nerima Wako-Ojiwa compared the work to "modern-day slavery" and called the facilities "AI sweatshops with computers instead of sewing machines."
The pay gap is obscene. OpenAI software engineers in San Francisco earn a median $910,000 annually. Kenyan essential workers doing the traumatic labor that makes models safe earned $3,168 yearly. That is a 287-fold difference: white-collar workers at headquarters receive 287 times more than the predominantly Black workers in the Global South doing work so psychologically damaging that OpenAI's own contractors abandoned it.
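The two headline figures in this section compose directly from the reported numbers; a quick arithmetic check, using only the values cited above (and treating roughly $1.50/hour as the worker rate within the reported $1.32-$2.00 range):

```python
# Back-of-the-envelope check of the pay figures cited in this section.
sf_median_salary = 910_000   # median annual pay, San Francisco engineers (USD)
kenya_annual_pay = 3_168     # annual pay for Kenyan data workers (USD)
print(sf_median_salary / kenya_annual_pay)            # ~287x pay gap

rate_paid_to_sama = 12.50    # hourly rate OpenAI paid Sama per worker (USD)
rate_paid_to_worker = 1.50   # approximate hourly rate workers received (USD)
print((rate_paid_to_sama - rate_paid_to_worker) / rate_paid_to_sama)  # ~0.88 -> 88% margin
```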
​
Meanwhile, OpenAI has paid exactly zero dollars to the millions of writers, artists, journalists, and researchers whose work trained GPT models. The company scraped 290,000+ books from pirated BitTorrent libraries, millions of news articles, and trillions of tokens of web content without authorization. When asked directly at SXSW 2024 whether artists should be compensated, OpenAI VP Peter Deng refused to answer, calling it "a great question" before dodging entirely. Sam Altman publicly stated "we need to find new economic models where creators can have new revenue streams" while implementing exactly none.
​
Job displacement philosophy reveals the agenda. OpenAI's mission is building AGI defined as "a highly autonomous system that outperforms humans at most economically valuable work." This is replacement by definition, not augmentation. Altman promotes universal basic income as the solution, positioning government safety nets as the answer to job loss his company deliberately engineers. His UBI study showed cash payments help with immediate needs but create "no improvement in employment quality, education, or long-term health"—hardly a comprehensive solution to AI-driven job elimination.
​
What works: San Francisco employees report 4.5/5 satisfaction, excellent compensation ($800,000+ total packages), strong benefits, and 96% would recommend to a friend. Some publishers received licensing deals (though not revenue sharing).
​
What fails: Zero compensation to creators whose work trained models, colonial labor exploitation extracting 88% margins from traumatic Global South work, documented psychological harm with no accountability, 287x pay gap between headquarters and essential workers, and openly building technology designed to "outperform humans at most economically valuable work" while promoting UBI as damage control.
​
3. Democratic Governance & Accountability: 1/10 (15% weight)
The November 2023 board crisis exposed OpenAI's governance as pure theater. The safety-focused board fired Sam Altman for being "not consistently candid" in communications—later revealed by former board member Helen Toner to mean "years of withholding information, misrepresenting things, in some cases outright lying." Within five days, 770 of 800 employees threatened to quit unless the board resigned, Microsoft offered to hire the entire staff, and Altman was reinstated with a reconstructed board stripped of independent safety oversight. The two women directors who raised safety concerns—Helen Toner and Tasha McCauley—were removed. Every governance safeguard failed when tested.
​
The safety team exodus tells the real story. In May 2024, Ilya Sutskever (co-founder, chief scientist) and Jan Leike (Superalignment co-lead) resigned in protest. Leike's statement was damning: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we reached a breaking point. Over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. OpenAI must become a safety-first AGI company."
​
The Superalignment team was immediately dissolved. Nearly 50% of the 30-member team departed. Daniel Kokotajlo left because he "lost confidence that OpenAI would behave responsibly around AGI." William Saunders testified to the Senate that "internal security was not prioritized despite claims." Leopold Aschenbrenner was fired explicitly for sharing security concerns with the board. Nine current and former employees filed whistleblower complaints documenting "a culture of recklessness and secrecy" with "retaliation against whistleblowers."
The nonprofit-to-profit transition represents mission betrayal. Founded in 2015 as a nonprofit "unconstrained by need for financial return," OpenAI created a "capped-profit" structure in 2019 (100x return cap) to attract Microsoft's $1 billion. That cap is now being removed entirely as OpenAI restructures into a Public Benefit Corporation with unlimited investor returns. Microsoft's investment has grown to $13 billion, giving the company effective veto power despite no formal board seat. The nonprofit board retains nominal control but demonstrated zero ability to enforce safety priorities when commercial interests objected.
​
Systematic transparency failures compound the governance collapse. OpenAI promised the Superalignment team 20% of compute resources—Fortune reported this was "never allocated." Safety scorecards were committed for every model release: GPT-4o's arrived three months late, and scorecards for GPT-4.1, o1 Pro, and o3 were never released at all. The Financial Times revealed testing timelines were "slashed from months to days" while OpenAI publicly claimed increasing caution. GPT-4o was rushed to launch before safety evaluations had "meaningfully started," according to whistleblowers.
​
The company illegally suppressed whistleblowing through non-disparagement agreements threatening vested equity worth millions—employees had to waive federal whistleblower rights and couldn't even acknowledge the NDA existed. The SEC received complaints that OpenAI "illegally prohibited employees from warning regulators."
​
"Democratic inputs" are participation-washing. OpenAI's $1 million "Democratic Inputs to AI" grant program explored how communities might inform model behavior rules. Critical limitation: inputs are explicitly non-binding. The program excluded questions about development priorities, safety standards, deployment decisions, or economic impact—only "model behavior." No community representation on the board, no binding votes, no veto power over dangerous developments. Real democratic governance would give affected communities actual power; OpenAI gave them a survey.
​
What works: Essentially nothing functional for ethical oversight. The structure actively prevents accountability.
​
What fails: Board attempted to enforce safety priorities and was overridden in five days, 50% of safety team quit in protest, nonprofit mission abandoned for $500 billion valuation, systematic suppression of safety concerns through illegal NDAs, whistleblowers fired or silenced, safety evaluations rushed or skipped entirely, "democratic inputs" non-binding theater, and CEO who was fired for dishonesty now sits on the board that fired him.
​
4. Accessibility & Economic Justice: 3/10 (15% weight)
OpenAI maintains flat global pricing at $20/month for ChatGPT Plus despite massive income disparities—a pricing structure designed for Global North professionals that excludes much of the world. In Turkey, $20 represents 650 lira monthly amid severe inflation. Moroccan users report it's "a big deal financially" compared to average income. Colombians describe it as "prohibitively expensive due to difference in purchasing power." Venezuelan users document "significant inequality" created by pricing. Only India and Indonesia received an affordable "Go" tier at $4.60/month (₹399)—a clear admission that standard pricing is unaffordable in price-sensitive markets, yet OpenAI refuses to extend this to the other 180+ countries.
​
Geographic blocking affects 1.5+ billion people. OpenAI blocks China (1.4 billion people), Russia, Iran, North Korea, Hong Kong, Syria, Afghanistan, Yemen, and additional conflict zones. While blocking major authoritarian regimes has some justification, the company provides no alternative access for researchers, activists, or humanitarian organizations in these regions. This creates an AI access divide along geopolitical lines, with consequences that will compound as AI becomes infrastructure. VPN use triggers account suspension—OpenAI actively enforces exclusion.
​
Accessibility features exist but are inadequate. OpenAI partnered with Be My Eyes for visual accessibility using GPT-4, but the iOS app has severe screen reader problems reported continuously from 2023-2025: VoiceOver announces only "close menu" instead of actual UI elements, controls disappear after first message, camera button inaccessible to screen readers. Blind developers built Chrome extensions to fix ChatGPT accessibility issues—evidence the company doesn't prioritize accessibility internally. No neurodiversity accommodations are documented. No dedicated accessibility program exists.
​
Grassroots organizations face prohibitive barriers. Small nonprofits must pay minimum $50/month for Team plan (2 seats × $25), priced out entirely from sustained organizing work. Free tier rate limits (messages per 5-hour window, 3 files/day) make serious work impossible. The nonprofit discount through Goodstack is 20% off Team—still $20-24 per user monthly—and excludes academic institutions, religious organizations, and governmental entities. API costs accumulate rapidly for developers in resource-constrained contexts with no free tier since 2023.
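The cost barrier for a small organization follows directly from those plan prices. A rough tally, using only the figures reported above (prices and the Goodstack discount may of course change):

```python
# Rough cost tally for a small nonprofit, from the plan figures cited above.
team_seat_price = 25            # USD per seat per month
minimum_seats = 2
print(team_seat_price * minimum_seats)              # $50/month minimum for a Team workspace

nonprofit_discount = 0.20       # 20% off Team via Goodstack
print(team_seat_price * (1 - nonprofit_discount))   # still $20 per user per month

go_tier_price = 4.60            # India/Indonesia "Go" tier, USD/month
print(20 / go_tier_price)       # Plus costs roughly 4.3x the Go tier
```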
​
Educational pricing is token and restricted. ChatGPT Edu targets universities only (pricing undisclosed, contact sales). The two-month free Plus trial for students applies only to U.S. and Canada (March-May 2025 pilot), then full $20/month. No ongoing student discounts globally, no K-12 educator pricing, no individual teacher access, no community college accommodations. Language support covers 50+ languages but performance dramatically varies—English performs best, Global South languages receive less training data and worse results.
What works: Industry-standard encryption and security, some accessibility improvements for blind users on desktop web, Be My Eyes partnership, 50+ language theoretical support, and India/Indonesia received affordable tier showing regional pricing is possible.
What fails: Flat global pricing at $20/month despite income disparities, only 2 of 195 countries get affordable tier, blocks 1.5+ billion people in conflict zones with no alternative access, severe iOS accessibility problems unresolved for years, grassroots organizations priced out ($50+ minimum for Team), no meaningful nonprofit discount (20% still expensive), educational pricing limited to universities only, no global student discounts, API costs prohibitive for Global South developers, and language support dramatically skewed toward English.
​
5. Movement Security: 2/10 (10% weight)
OpenAI represents high security risk for activists, organizers, and anyone in sensitive contexts. The company launched "OpenAI for Government" in June 2025, securing $200 million in Pentagon contracts for "warfighting and enterprise domains" with "frontier AI capabilities." This complete reversal from the pre-2024 policy explicitly banning military applications signals deep integration with U.S. national security apparatus. Partnerships now include Los Alamos National Laboratory (nuclear weapons), Air Force Research Laboratory, and multiple defense agencies. Microsoft offers Azure OpenAI services authorized for SECRET classified information—OpenAI's technology serves intelligence agencies.
​
Active conversation scanning and police reporting present direct threats. OpenAI announced in 2025 it actively scans user conversations for threatening content using "specialized pipelines" routing flagged messages to human reviewers "authorized to take action, including banning accounts." If reviewers "determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement." There is no transparency about what triggers flags, how "threat" is defined, or whether the system distinguishes legitimate activism from violence. Activists discussing protest tactics, direct action, or confrontational organizing could easily trigger police reports.
Law enforcement compliance rates are high: 62% for non-content requests (email, payment info, account details) and 75% for content requests (actual conversations). Transparency reports show 29 non-content requests yielded 49 accounts disclosed and 8 content requests yielded 11 accounts disclosed in H1 2024 alone. OpenAI requires only subpoenas for non-content data and search warrants for content—relatively low legal barriers. Emergency disclosure provisions allow sharing data with "no legal process" if OpenAI believes there's "danger of death or serious physical injury."
​
Encryption is wholly inadequate for sensitive use. OpenAI provides zero end-to-end encryption—the company can read all conversations and must be able to for AI processing. Sam Altman stated in August 2025 that OpenAI is "exploring" E2EE for temporary chats but acknowledged major "technical hurdles" with no timeline. Anonymous use without accounts became possible in April 2024, but IP addresses are still logged, device fingerprinting likely occurs, and all prompts/responses are temporarily stored. Court orders can mandate indefinite retention—the May 2025 federal order forcing OpenAI to retain even deleted chats means law enforcement can access conversations users believed were private.
​
Documented surveillance tool development is concerning. In February 2025, OpenAI banned Chinese-origin accounts using ChatGPT to debug code for a social media surveillance tool designed to monitor protests about "human rights in China" across X, Facebook, Telegram, Instagram, YouTube, and Reddit, feeding "real-time reports about protests in the West to Chinese security services." OpenAI also detected Iranian threat actors researching industrial control systems attacks, North Korean IT workers creating fake identities, and multiple state actors generating disinformation. The company bans accounts after detection, but detection is reactive—the tools were already built. Combined with OpenAI's own law enforcement cooperation, activists face dual risks: both state surveillance using OpenAI tools and OpenAI's reporting to authorities.
​
Geographic blocking of major authoritarian regimes is the sole positive. OpenAI blocks China, Russia, Iran, and Saudi Arabia, preventing direct access in those contexts. However, limitations undermine this: VPNs can circumvent blocks, accounts created in approved countries work elsewhere, Azure OpenAI has different restrictions (China access through 21Vianet partner), and many repressive governments (Egypt, UAE, Turkey, Ethiopia) still have full access. The "democratic" framing for who gets access is subjective and doesn't guarantee activist safety—U.S. law enforcement extensively surveils racial justice, environmental, and anti-war movements.
​
What works: Blocks access in China, Russia, Iran, Saudi Arabia (positive but incomplete), publishes semi-annual transparency reports showing request volumes and compliance, bans accounts developing surveillance tools after detection, offers anonymous use without account, and enterprise customers can configure zero data retention.
​
What fails: $200 million Pentagon contract with "warfighting" applications, active scanning of all conversations with proactive police reporting, 62-75% law enforcement compliance rates with no evidence of legal pushback, zero end-to-end encryption with no implementation timeline, IP addresses logged even in "anonymous" mode, court-ordered indefinite retention destroys "temporary chat" privacy, documented use for state surveillance tool development, broad definitions of "threats" that could capture legitimate activism, many authoritarian regimes retain access, and structural integration with U.S. military/intelligence creates fundamental conflict with activist safety needs.
​
6. Environmental Justice: 3/10 (10% weight)
GPT-4 training consumed an estimated 51-62 million kilowatt-hours and produced 12,456-14,994 metric tons of CO2e—equivalent to the annual emissions of 938 average Americans for a single model. OpenAI's planned expansion with Nvidia (10 gigawatts) plus the Stargate initiative (7 gigawatts) totals 17 gigawatts—enough to power Switzerland and Portugal combined. Each ChatGPT query uses five times more energy than a Google search. By one estimate, writing a 100-word email with GPT-4 consumes 0.14 kilowatt-hours and 519 milliliters of water for data center cooling; by another, every 20-50 queries consumes a small plastic bottle of water.
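Those equivalences follow from the reported figures themselves; a short check, where the per-American footprint is simply what the numbers above imply rather than an independent estimate:

```python
# Check the equivalences implied by the figures cited above.
training_co2e_low, training_co2e_high = 12_456, 14_994  # metric tons CO2e, GPT-4 training
americans_equivalent = 938
# Implied annual footprint per average American: ~13.3-16.0 t CO2e
print(training_co2e_low / americans_equivalent, training_co2e_high / americans_equivalent)

planned_capacity_gw = 10 + 7   # Nvidia buildout (10 GW) plus Stargate (7 GW)
print(planned_capacity_gw)     # 17 GW of planned data center capacity
```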
​
Environmental justice is completely absent. Despite extensive research finding zero programs, partnerships, or documented consideration of frontline communities, OpenAI proceeds with massive expansion in water-stressed regions. Two-thirds of new U.S. data centers since 2022 were built in high water-stress areas, disproportionately harming low-income communities and communities of color. Academic consensus holds that AI deployment without equity safeguards deepens environmental inequalities, yet OpenAI demonstrates no community consultation for data center siting, no Indigenous data sovereignty initiatives, no environmental justice impact assessments, and no support for climate-frontline populations.
​
Transparency ranks among the worst in Big Tech. OpenAI publishes no annual sustainability report (unlike Google's 86-page detailed reports or Microsoft's comprehensive disclosures). The company revealed per-query energy (0.34 watt-hours) and water consumption (0.000085 gallons) only in January 2025 after sustained pressure. OpenAI refuses to disclose total annual emissions, current renewable energy percentage, specific data center locations and energy mix, training emissions for individual models, water consumption totals by facility, detailed methodology for emissions calculations, or progress reports on reduction targets. DitchCarbon scored OpenAI 28/100 for sustainability (Anthropic received 23/100)—both far below industry standards.
​
The company set a Science Based Targets initiative-aligned goal of 42.7% absolute emissions reduction by 2030 but has no net-zero commitment. Various sources claim OpenAI aims for "100% renewable by 2025" or "carbon neutrality by 2030," but no primary source verification exists—these appear to be aspirational marketing claims. OpenAI's reliance on Microsoft Azure (PUE 1.18, working toward 100% renewable) obscures accountability since Microsoft's emissions rose 29.1% from 2020-2023 due to "infrastructure needed to advance new technologies, including generative AI"—much of this increase attributable to OpenAI.
​
The $50 million "People-First AI Fund" announced in July 2025 includes no climate justice component despite being 0.01% of the company's $500 billion valuation. OpenAI's 17-gigawatt expansion proceeds without commensurate environmental accountability, forcing reliance on fossil fuel power plants since renewable buildout cannot match AI infrastructure growth pace. MIT Climate Consortium concluded "demand for new data centers cannot be met sustainably."
​
What works: Recent per-query disclosure shows some transparency improvement, SBTi-aligned reduction target exists (42.7% by 2030), partnership with CNaught for user-optional carbon offsetting, reliance on Azure provides access to some renewable infrastructure, and per-query emissions relatively low compared to human alternatives for equivalent tasks.
​
What fails: GPT-4 training emissions equivalent to 938 Americans' annual footprint, massive 17-gigawatt expansion without accountability, zero climate justice programs or partnerships documented, no net-zero target unlike major competitors, among worst transparency in industry with no sustainability reports, entirely dependent on Microsoft's renewable strategy with no independent commitments, expansion in water-scarce regions harms vulnerable communities, training emissions for successive model generations "go to waste" as models are replaced, and benefits accrue to affluent users while environmental costs imposed on frontline populations.
​
7. Bias & Algorithmic Justice: 2/10 (10% weight)
OpenAI's models exhibit severe, well-documented bias across every dimension tested. A Lancet Digital Health study found GPT-4 "consistently produced clinical vignettes that stereotype demographic presentations" and exaggerated racial prevalence differences in 89% of diseases tested. An MIT study revealed GPT-4 showed 2-15% lower empathy for Black users and 5-17% lower for Asian users in mental health contexts. Most damning, a Nature study documented that ChatGPT exhibited "covert racism more severe than the most unfavorable human stereotypes about African Americans ever recorded in academic studies"—when presented with African American English, models showed 19% worse stereotyping, 25% worse demeaning content, and 15% worse condescending responses than with Standard American English.
​
Criminal justice bias is stark. Scientific American experiments found stories generated with "black" rated 3.8/5 for "threatening and sinister" content versus 2.6/5 for identical prompts using "white." All "black" stories were set in urban crime settings; all "white" stories took place in tranquil suburbs with personalized victims. In simulated sentencing cases, AI gave death sentences to African American English speakers 28% of the time versus 23% for Standard American English speakers. A ScienceDirect study testing 34,560 CV combinations found ChatGPT showed "ethnic discrimination is more pronounced than gender discrimination," systematically rating Arab, Black, Central African, and Hispanic applicants lower than white candidates.
​
DALL-E bias reveals occupational stereotypes. An ACM study analyzing 150 professions found DALL-E assumed "salesperson" and "singer" are 100% women (actual statistics: 49% and 26% respectively) while assuming "biologist" and "judge" are predominantly men (actual: 58% and 56% women). "Almost all occupations" were represented as populated by white people. NBC News found "Latina" prompts in version 1 generated 20% pornographic content with 30%+ marked "unsafe" due to sexualization. Native American prompts overwhelmingly generated headdresses "in almost all representations"—not reflecting reality. When no race was specified, generated images depicted white people in positions of authority across professions.
​
Training data perpetuates Western-centric bias. GPT-3 training was 60% internet-crawled material with heavy Western/English bias, 22% curated internet content, 16% books, and 3% Wikipedia. OpenAI acknowledges the model is "skewed towards Western views and performs best in English" with "Standard American English likely the best-represented variety." Global South perspectives are systematically marginalized, non-standard English varieties severely under-represented, and minoritized language communities excluded entirely. This creates "risk reinforcing power dynamics and amplifying inequalities that harm minoritized language communities."
​
Bias mitigation efforts are inadequate and sometimes counterproductive. Reinforcement Learning from Human Feedback (RLHF)—OpenAI's primary mitigation approach—was shown in 2024 arXiv research to "suffer from inherent algorithmic bias" causing "preference collapse where minority preferences are virtually disregarded." Research demonstrated GPT-4 responses with RLHF exacerbated stereotyping by 14% compared to GPT-3.5. Sam Altman acknowledged "the bias I'm most nervous about is the bias of the human feedback raters" but documented no diverse annotator pool. Red teaming occurs after model development (post-hoc rather than preventive) with limited scope and coverage gaps: "some steps to prevent harmful content have only been tested in English."
​
Development team diversity is opaque. OpenAI employs ~2,659 people but publishes zero workforce demographic data. Following the November 2023 board crisis, the reconstituted board consisted entirely of three white men (Bret Taylor, Larry Summers, Adam D'Angelo); the board's only women directors departed during the upheaval. Board member Larry Summers previously caused "uproar at Harvard" by suggesting "innate differences in the sexes" held back women in science. Congressional representatives Emanuel Cleaver and Barbara Lee wrote to OpenAI: "The AI industry's lack of diversity and representation is deeply intertwined with the problems of bias and discrimination in AI systems." Research demonstrates corporate boards lacking gender diversity have "higher risk of engaging in unethical conduct." OpenAI's website offers generic "commitment to diversity, equity and inclusion" statements but zero metrics or accountability.
​
Real-world harm is extensively documented. ChatGPT "consistently ranked resumes with disability-related honors and credentials lower"—disability-enhanced CVs ranked first only 25% of the time. Medical diagnostic recommendations pose "concerns for using LLMs to generate simulated clinical data" due to stereotyped outputs. TechCrunch reported "all it takes is tweaking the 'system' parameter of ChatGPT API" to generate consistently toxic outputs—safety measures are easily bypassed.
​
What works: Red teaming network exists with external experts, content filters implemented (though bypassable), weekly meetings with human reviewers (composition undisclosed), and some improvements between model versions for specific bias categories.
​
What fails: "Covert racism more severe than most unfavorable human stereotypes ever recorded," 89% of medical conditions showed racial stereotype exaggeration, 2-17% lower empathy for minority users, 19-25% worse treatment of non-standard English speakers, criminal justice bias with 28% death sentence rate for AAE speakers, RLHF worsens bias by 14% in some cases, zero workforce diversity data published, all-male white board during critical period with Larry Summers appointment, training data systematically excludes Global South and minoritized communities, bias mitigation is reactive (post-hoc) not preventive, no public accountability metrics or timelines, and 500 million weekly users means massive real-world impact across hiring, healthcare, education, and justice systems.
​
8. Community Benefit vs. Extraction: 1/10 (5% weight)
OpenAI has extracted an estimated $7.5+ billion in intellectual property value (conservative estimate from book licensing alone) while paying exactly $0 to creators whose work trained the models. The company scraped 197,000 pirated books from the Books3 dataset via BitTorrent, 294,000+ titles from "shadow libraries," 7.5 million books plus 81 million research papers from LibGen, and trillions of tokens from the public internet—equivalent to 3.7 billion pages of content taken without authorization. This extraction fueled a $500 billion valuation (up from $30 billion in 2023) with returns flowing exclusively to investors, executives, and employees while creators whose labor made it possible received nothing.
​
The extraction-to-benefit ratio is conservatively 10,000:1. OpenAI announced a "People-First AI Fund" of $50 million in July 2025 with great fanfare—this represents 0.01% of the company's $500 billion valuation, 0.4% of $12-13 billion annual revenue, and 0.5% of the $10+ billion the company burns annually. The $50 million equals 0.01% of the $500 billion Stargate data center initiative spending. This is not meaningful redistribution—it's a rounding error designed for positive press.
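The ratio is straightforward to reproduce from the figures in this section:

```python
# Arithmetic behind the extraction-to-benefit ratio cited above.
valuation = 500e9        # company valuation, USD
fund = 50e6              # "People-First AI Fund", USD
annual_revenue = 12.5e9  # midpoint of the $12-13 billion annual revenue figure

print(fund / valuation)        # 0.0001   -> 0.01% of valuation
print(fund / annual_revenue)   # ~0.004   -> ~0.4% of annual revenue
print(valuation / fund)        # 10,000.0 -> the 10,000:1 ratio
```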
​
When asked directly at SXSW 2024 whether artists should be compensated for training data, OpenAI VP Peter Deng refused to answer, calling it "a great question" before dodging. Sam Altman publicly stated in December 2024: "I think we do need a new [standard] for how creators are going to get rewarded. We need to find new economic models where creators can have new revenue streams." He suggested a "micropayments" model similar to NIL deals for college athletes but has implemented exactly zero such programs. Revenue sharing was "announced" for GPT Store creators at DevDay 2023—still not operational as of 2025 with no details on percentages, structure, or timeline.
Who captures the $500 billion valuation? Microsoft holds an estimated 49% stake for its $13 billion investment. Early investors are seeing 16x returns in 18 months. OpenAI employees received $6.6 billion in share sales at the October 2025 valuation plus a previous $1.5 billion sale in November 2024. When the nonprofit-to-for-profit conversion completes, Sam Altman will receive approximately 7% equity—worth $10.5-35 billion at current valuation. Communities whose data trained the models: $0. Writers facing competition from AI trained on their work: $0. Artists seeing styles replicated: $0. Journalists' content used to build competing products: $0.
​
OpenAI was founded on openness and systematically betrayed it. The 2015 charter promised to "advance digital intelligence unconstrained by a need to generate financial return" with "all researchers sharing papers, blog posts, or code" and "patents (if any) shared with the world." By 2020, GPT-3 was closed-source and exclusively licensed to Microsoft. GPT-4 disclosed zero technical details, citing "competitive landscape." All current frontier models are proprietary with no model weights or training data released. Elon Musk, co-founder, sued OpenAI saying it should be called "ClosedAI." The company open-sourced only obsolete early models (GPT-2 after initial refusal and public backlash, CLIP) while keeping all business-critical technology proprietary.
​
Internal Meta communications unsealed in parallel lawsuits show the same pattern across the industry: employees knew they were using pirated content, with some expressing ethical concerns. Employees recommended: "remove data clearly marked as pirated/stolen" and "do not externally cite the use of any training data including LibGen." OpenAI, for its part, deleted the Books1/Books2 datasets after lawsuits were filed—potential destruction of evidence. Research demonstrates GPT-4o was trained on paywalled O'Reilly books (an 82% recognition rate) without authorization.
​
Active litigation documents the extraction. The New York Times lawsuit seeks "billions in damages" plus destruction of models trained on unauthorized content. Canadian media coalition (Toronto Star, CBC, Globe and Mail) filed 84-page lawsuit seeking CAD $20,000 statutory damages per work. Indian media (NDTV, Indian Express, Hindustan Times) filed consolidated cases. The Authors Guild, Sarah Silverman, Ta-Nehisi Coates, and dozens of individual writers sued over book scraping. Visual artists including Grzegorz Rutkowski (Dungeons & Dragons/Magic: The Gathering) sued over style replication. Courts ordered OpenAI to provide training datasets for inspection—core copyright claims are proceeding.
​
Academic researchers receive token compensation. OpenAI offers "up to $1,000 in API credits" per researcher who applies through SurveyMonkey for quarterly review—not automatic or free access. The company extracted trillions of tokens of free research and academic content for training but provides essentially nothing back. Georgetown CSET researchers note: "AI systems are being trained on years of human expertise. Workers are being displaced by tools built on their own labor, without credit or compensation. It's not just automation; it's extraction."
​
What works: Some publishers received licensing deals (though not revenue sharing), OpenAI Academy provides free AI literacy workshops (helping people use OpenAI's products), and $50 million community fund exists despite being 0.01% of valuation.
What fails: $0 compensation to creators whose work trained $500 billion in valuation, extraction-to-benefit ratio conservatively 10,000:1 or higher, $50 million fund is 0.01% of valuation and 0.4% of revenue (rounding error, not meaningful), revenue sharing announced in 2023 for GPT Store still not implemented, founded on openness and became completely closed-source, scraped 500,000+ books from pirated repositories without permission, deleted evidence (Books1/Books2 datasets) after lawsuits filed, Sam Altman promises creator compensation while implementing zero programs, when converting to for-profit Altman receives $10.5-35 billion while communities get nothing, and systematic exploitation masked by minimal "benefit" initiatives that represent 0.01% of extracted value.​
​
Synthesis and Recommendations
Do not use OpenAI tools for sensitive work. The combination of active conversation scanning, proactive law enforcement reporting, 62-75% government request compliance, $200 million Pentagon contracts, zero end-to-end encryption, and systematic transparency failures creates a hostile environment for activism. Every conversation can be read by OpenAI, flagged by automated systems, reviewed by human moderators, and reported to police without warrants. The company's deep integration with U.S. military and national security infrastructure makes these tools fundamentally incompatible with movement security needs.
​
Understand the extraction model if you choose to use free/limited tools. Your prompts, writing style, creative ideas, and intellectual labor train future models you'll never benefit from financially. The company has demonstrated zero commitment to compensating communities whose data creates value. The "free" tier exists to extract training data—you are not the customer, you are the product. Rate limits and feature restrictions ensure free users cannot meaningfully compete with paying customers while still providing valuable training data.
Consider the labor conditions your use supports. Every query contributes to demand that previously required Kenyan workers to review traumatic content for $1.50/hour while OpenAI extracted 88% profit margins. The company has shown no accountability for documented psychological harm and maintains 287-fold pay gaps between headquarters engineers and Global South essential workers. Using these tools materially supports colonial labor practices.
​
Recognize the governance implications. OpenAI's collapse from safety-first nonprofit to Pentagon contractor happened because employee loyalty to the CEO overrode board oversight in five days. The 50% safety team exodus and systematic suppression of warnings about dangerous capabilities reveal an organization structurally incapable of prioritizing safety over growth. The technology you're using was built in an environment where people who raised concerns were fired or silenced through illegal NDAs threatening millions in vested equity.
​
Better alternatives exist. For activists and organizers prioritizing security, privacy, community benefit, and ethical labor practices:
- Local AI models (Ollama, LocalAI) run entirely on your device with zero surveillance risk (see the sketch after this list)
- Encrypted messaging (Signal) for organizing communication
- Privacy-focused tools (Tuta, Proton) for document collaboration
- Claude for AI work requiring community accountability
- Direct human labor compensated fairly rather than AI trained on exploitation
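For the first alternative, a minimal sketch of what local inference looks like in practice, assuming Ollama is installed, its local server is running on the default port, and a model has already been pulled with `ollama pull llama3`; prompts and responses never leave your machine:

```python
# Minimal sketch: query a locally running Ollama server instead of a hosted API.
# Assumes `ollama pull llama3` has been run and the Ollama service is listening
# on its default port (11434). Everything stays on your own hardware.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",
    "prompt": "Draft a short agenda for a tenant-organizing meeting.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```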
OpenAI represents everything progressive movements oppose: massive wealth extraction from communities, colonial labor practices, military-industrial complex integration, systematic surveillance, governance collapse preventing accountability, and mission abandonment for investor returns. The company's $500 billion valuation was built by taking trillions of tokens from creators without permission, paying Kenyan workers poverty wages to make models safe, and now selling access back to communities at $20/month while scanning conversations for police. This is not a tool for liberation—it's digital colonization with a friendly interface.
​



