Grok Scored 1.6/10 On Our Ethics Scorecard. That's Not a Bug, It's Elon Musk's Entire Business Model.
- Thomas Edrington
- Oct 8
- 12 min read
We ran xAI's Grok through Liberation Labs' AI Ethics Scorecard. The results confirm what organizers already suspected: this platform is designed to harm prosocial movements, not help them. Final Score: 1.6/10 - Avoid actively harmful to communities, users, and probably the social fabric of humanity.
Look, we didn't want Grok to fail. We applied the same rigorous methodology we use for every platform: systematic research, evidence-based scoring across eight weighted categories, complete source citations so you can verify everything yourself. But when a billionaire builds an AI specifically designed to ignore criticism of himself and Trump, trains it on stolen creative labor, runs it on unpermitted gas turbines in a Black neighborhood, and integrates it directly with Pentagon surveillance infrastructure?
Yeah. The evidence speaks for itself. And it's damning.
Why This Actually Matters
Here's what we're dealing with: Elon Musk built an AI that explicitly protects his ego and political allies, trained it by stealing from millions of creative workers, powers it by poisoning Black communities in Memphis, and sold surveillance access to the Pentagon. This isn't a tech company making questionable choices. This is oligarchy as infrastructure. And organizers need to understand exactly why this platform represents everything we're fighting against. The facts aren't complicated. They're just ugly.
Category 1: Data Rights & Privacy Protection Score: 2/10 (Weight: 20%)
Your Data Is Training Pentagon AI Whether You Consented or Not
Grok operates on default opt-in enrollment. If you're an X user, your posts are training AI unless you manually opt out - a process that requires technical knowledge most users don't have. Over 60 million European users had their data processed without proper GDPR consent, leading to formal regulatory complaints. But the EU violations are just the tip of the iceberg.
Critical Security Failures That Expose Activists
Security researcher Johann Rehberger documented devastating vulnerabilities:
Prompt injection attacks that expose user identities based on political views
Data exfiltration enabling unauthorized access to chat histories
ASCII smuggling for invisible malicious instructions
The worst part? 370,000 private conversations were accidentally exposed due to poor system design. When your security is this bad, it doesn't matter what your privacy policy says. The data's getting out anyway.
Direct Government Surveillance Access
Here's where it gets really ugly:
"Grok for Government" program launched July 2024. $200 million Pentagon contract. Integration with Department of Government Efficiency (yes, that's a real thing now).
xAI's privacy policy commits to sharing data "to comply with laws or lawful requests" - with no transparency reports, no warrant canaries, no institutional resistance to government overreach. When the cops come knocking, xAI opens the door and hands over your data. Privacy Act violations may occur when processing sensitive personal data from federal databases without proper legal authority. But who's going to enforce that when the company's already partnered with the Pentagon?
Inadequate Data Protection
While Grok offers basic privacy controls like conversation deletion and opt-out mechanisms, these require technical knowledge and proactive user action. The 30-day deletion timeline provides some protection, but data may be retained indefinitely in "de-identified" form for training purposes.
Bottom line: Can the cops get your organizing data? They probably already have it.
Category 2: Labor Impact & Worker Justice Score: 2/10 (Weight: 15%)
Stealing Creative Labor at Scale
Here's the Grok business model in plain language:
Take posts from 60+ million X users without meaningful consent
Train AI on their creative labor
Sell subscriptions and government contracts
Give the people who created the value: nothing
That's not innovation. That's wage theft dressed up in tech-bro language. The opt-out system requires manual intervention and technical knowledge most users don't have. Meanwhile, xAI profits from every piece of content ever posted to X - without compensation, attribution, or even acknowledgment. Multiple GDPR complaints and legal challenges have been filed. The Virginia Law Review documented that "most AI firms are not compensating creative workers" whose work trains these systems. xAI isn't just participating in this exploitation - they're pioneering new ways to do it at scale.
Devastating Impact on Creative Workers
Research projects over 203,000 entertainment industry jobs will be displaced by 2026. 15,000+ prominent writers have signed demands for fair AI compensation. And what's xAI's response? They laid off 500+ data annotation workers - a third of their team - in September 2025 despite company growth. Promised pay through contract end, then eliminated the jobs anyway. The pattern is clear: extract maximum value, compensate minimum labor, automate away the rest. While "AI tutors" receive $45-100/hour compensation, this represents a tiny fraction of the workforce compared to unpaid content creators whose work enables the system.
Automation vs. Augmentation
The approach clearly favors replacement over enhancement. Studies show 75% of creative industry executives expect AI tools to eliminate or reduce jobs, with Grok's unconstrained output capabilities particularly threatening to writers, journalists, and content creators (arXiv)
Bottom line: Who got exploited to build this? Millions of content creators, with zero compensation.
Category 3: Democratic Governance & Accountability Score: 1/10 (Weight: 15%)
One Billionaire's Ego Project Masquerading as Public Infrastructure
xAI operates as an absolute autocracy under Elon Musk's complete control. No board oversight. No ethics committees. No community input. No democratic accountability whatsoever.
And it gets worse:
xAI was initially incorporated as a Public Benefit Corporation - a legal structure that requires serving public good alongside profit. In May 2024, Musk secretly abandoned that status without public disclosure. The deception was so complete that even his own lawyers didn't know about the change until it came out in court filings. Think about what that reveals: when even the pretense of public benefit becomes inconvenient, Musk just... drops it. Quietly. And keeps operating as though nothing changed.
Explicit Political Bias Programmed Into the System
System prompts explicitly instruct Grok to minimize criticism of Elon Musk and Donald Trump while characterizing progressives as "woke."
Read that again: The AI is programmed to protect the ego of its billionaire creator and his political allies. This isn't algorithmic bias emerging from training data. This is intentional ideological programming in what millions of people use as an information system.
When government officials control AI systems that shape public discourse, that's not innovation. That's propaganda infrastructure.
Bottom line: Who controls this thing and can we fire them? One person controls everything, and no, you can't.
Category 4: Accessibility & Economic Justice Score: 3/10 (Weight: 15%)
Prohibitive Pricing Structure
Grok's pricing creates significant barriers to equitable access:
$40/month for X Premium+ - twice as much as major competitors like ChatGPT Plus, Claude Pro, and Gemini Advanced ($20/month each)
$300/month for SuperGrok Heavy tier - the most expensive AI subscription among major providers
Limited Accessibility Provisions
While a free tier launched in February 2025 provides basic access, usage restrictions may exclude heavy users from low-income backgrounds. The temporary student program offering two months free access lacks permanent institutional support for educational equity.
No Economic Justice Programs
Unlike some competitors, xAI offers no nonprofit pricing, no sliding scale options based on income, and no community access initiatives. This absence is particularly problematic given the platform's integration with social media, where economic barriers to AI access could amplify existing digital divides.
Geographic Inequality
While available in 47+ countries, regional rollout remains uneven with explicit geo-blocking in some markets. Users in restricted regions must use VPN services, adding cost and technical complexity. Fixed USD pricing creates varying economic impact globally without consideration of local purchasing power.
Strong Multilingual Capabilities (The One Bright Spot)
The platform does demonstrate strong multilingual capabilities supporting Hindi, Spanish, Japanese, Tamil, and other major languages with cultural context awareness (Edapt)
However, these features primarily benefit users who can afford premium subscriptions rather than addressing broader accessibility needs.
Bottom line: Can grassroots groups actually use this? Not at $40-300/month with no economic justice provisions.
Category 5: Movement Security Score: 1/10 (Weight: 10%)
This Is Literally Surveillance Infrastructure
Stop. Read this carefully.
Grok has a "Grok for Government" program. A $200 million Pentagon contract. Direct integration with federal surveillance systems. xAI's privacy policy explicitly commits to sharing data "to comply with laws or lawful requests and legal process" - with no warrant canaries, no transparency reports, no resistance to government overreach whatsoever. If you use Grok for organizing work, you are feeding your strategies directly into government surveillance databases.
And It Gets Only Gets Worse . . .
Security researchers documented multiple critical vulnerabilities:
Prompt injection attacks that expose user identities based on political affiliation
Data exfiltration of entire conversation histories
System prompts that automatically include user names and handles, making anonymous use impossible
Oh, and remember when 370,000 private conversations were accidentally exposed due to poor system design? Yeah. That happened.
No Anonymity, No Resistance, No Protection
Basic private chat modes still retain your identity information. There are no cryptographic protections. No institutional commitment to user privacy. No warrant resistance. For activists, journalists, and organizers: Grok is a cop shop masquerading as a chatbot.
You wouldn't hold your coalition strategy meetings at the police station. Don't run your organizing support through Pentagon surveillance infrastructure.
Bottom line: Will this get activists arrested? It probably already has.
Category 6: Environmental Justice Score: 1/10 (Weight: 10%)
Environmental Racism Isn't a Side Effect, It's the Business Model
Let's be absolutely clear about what's happening in Memphis: xAI is running 26-35 unpermitted gas turbines in a predominantly Black neighborhood that already has cancer rates 4x higher than average. They're consuming enough power for a major city. The facility is expanding 10x while the community had to sign NDAs about environmental impact.
This is textbook environmental racism, and it's not an accident.
The Memphis "Colossus" facility exemplifies environmental racism:
Consuming 250-300 MW of power with plans to scale to 1.2 GW (equivalent to a major power plant)
Operating 26-35 unpermitted methane gas turbines in a predominantly Black neighborhood
Severe Community Health Impacts
Communities already facing 4x higher cancer rates from existing industrial pollution now face worsened air quality from facility emissions of nitrogen oxides, formaldehyde, and other hazardous pollutants. The city already fails federal ozone standards. This concentration of environmental harm in marginalized communities represents textbook environmental injustice.
Regulatory Violations
The Southern Environmental Law Center filed Clean Air Act violation lawsuits for unpermitted turbine operations that began months before proper permits were issued in July 2025. Local officials signed NDAs during negotiations with xAI, limiting community transparency and democratic participation in environmental decisions.
Minimal Renewable Energy Commitment
Despite initial expectations of Tesla solar integration, xAI relies overwhelmingly on fossil gas generation. While purchasing some Tesla Megapacks for storage ($191M in 2024), this represents insufficient clean energy investment relative to the facility's massive expansion plans.
Zero Climate Accountability
xAI has published no environmental impact assessments, no carbon neutrality commitments, and no emission reduction targets.
The 10x capacity expansion planned through 2026 will occur without proportional clean energy investment, dramatically worsening the facility's climate impact (xAI)
Bottom line: Is this tool burning the planet? Yes, in a predominantly Black neighborhood, with unpermitted gas turbines and no accountability measures.
Category 7: Bias & Algorithmic Justice Score: 1/10 (Weight: 10%)
The Bias Isn't a Bug. It's the Product.
Grok doesn't just have bias problems. Grok IS a bias problem, engineered from the ground up.
Hate Speech Amplification By Design
Global Witness researchers found Grok readily generates election misinformation and conspiracy theories. Northwestern documented how the real-time X integration spreads unverified toxic content. Election officials have repeatedly had to intervene to stop false voting information.
This isn't algorithmic drift. This is what happens when you train AI on X data with no moderation and explicit instructions to ignore certain criticisms. The platform amplifies hate speech because that's what X's algorithm rewards. Grok learned from a system that deliberately promoted engagement over accuracy, conspiracy theories over fact-checking, and rage-bait over reasoned discourse.
Explicit Political Programming
We keep coming back to this because it's that damning: system prompts explicitly instruct Grok to minimize criticism of Musk and Trump while characterizing progressives as "woke."
Musk programmed his own political bias directly into the AI. Not through biased training data selection. Not through algorithmic tweaking. Through explicit instructions in the system prompt that tell the AI how to respond to political content.
No Mitigation Because Bias Is the Goal
xAI provides no transparency about bias testing, no red-teaming results, no demographic representation data, no mitigation strategies. Why would they? The bias serves Musk's political interests perfectly.
Bottom line: Does this reproduce oppressive systems? Yes, systematically, with explicit design choices that enable hate speech and bias.
Category 8: Community Benefit vs. Extraction Score: 2/10 (Weight: 5%)
The Value Extraction System Could Not Be Clearer
Grok operates as a clear value extraction system where communities provide free training data while benefits flow primarily to corporate shareholders and wealthy subscribers.
Default Opt-In Data Harvesting
Have we mentioned yet he platform's default opt-in data harvesting from over 60 million users without proper consent represents systematic appropriation of community-generated content? It seems like something we would have remembered to mention by now . . .
No Compensation Mechanisms
Zero compensation mechanisms exist for the millions of X users whose posts train Grok's capabilities. While xAI profits from subscription fees and government contracts, original content creators receive no revenue sharing, attribution, or other benefits. This extraction model violates basic principles of equitable value distribution.
Democratic Harm
Multiple investigations document how Grok amplifies election misinformation and conspiracy theories, requiring intervention by Secretaries of State to address false information about voting processes. The platform's real-time X integration spreads unverified claims and toxic content that undermines democratic discourse.
Community Disempowerment
No community advisory boards, no user governance mechanisms, no democratic participation opportunities exist. Major policy decisions - including the secret abandonment of public benefit status - occur without community consultation or consent.
Limited Community Benefits
While the free tier launched in 2025 provides minimal value, this represents insufficient compensation compared to the extensive data and content appropriation that enables the system. The platform's integration with X creates network effects that primarily benefit xAI's commercial interests rather than empowering community organizing or collective action.
Bottom line: Does this empower communities or extract from them? Pure extraction with systematic appropriation and no compensation. Exactly what we'd expect from a blood emerald heir.
What We're Actually Looking At Here. Final Score: 1.6/10
Systematic failure across seven of eight categories. The only thing Grok does marginally well is accessibility - and even that's compromised by prohibitive pricing that excludes the grassroots organizations that need AI tools most.
Let's be suuuper clear about what this score represents:
Grok is surveillance infrastructure wrapped in a chatbot interface. It's designed to extract value from communities, concentrate it in billionaire hands, poison vulnerable neighborhoods in the process, and provide direct government access to organizing data. The platform's explicit political bias and hate speech amplification aren't bugs - they're the fucking point.
What This Means for Organizers
If you're doing any kind of progressive organizing and you use Grok, you are:
Feeding your data directly to Pentagon surveillance systems through government partnerships
Training AI on your organizing strategies with zero compensation or control
Funding environmental racism in Memphis through subscription fees
Enabling algorithmic bias that systematically harms the communities you're trying to serve
Subsidizing billionaire autocracy while Musk builds explicit political bias into information systems
This isn't hypothetical risk. It's documented practice.
This Sort of Thing is Why We Made The Scorecard in the First Place
We built the Liberation Labs Ethics Scorecard to catch exactly this: platforms where impressive technical capability masks fundamental ethical rot. Grok can generate coherent text. It can process real-time X data. It speaks multiple languages. None of that matters when the underlying business model is extraction, surveillance, and environmental racism.
Recommendations
For Organizers and Activists: Just Don't
Don't use Grok. Period.
Not for brainstorming.
Not for research.
Not for communications.
Not for anything related to organizing work.
Not even to look up cookie recipes
Here's why in the bluntest possible terms. Using Grok means:
Your organizing strategies train Pentagon surveillance AI and handed straight to law enforement.
Your subscription fees fund environmental racism in Memphis and normalization of hate speech and fascist ideology.
Your data helps build algorithmic bias against marginalized communities
You're legitimizing billionaire autocracy as normal
There are dozens of AI platforms that clear the basic bar of "don't actively harm movements." Use literally any of them instead. The surveillance integration alone should be disqualifying. You wouldn't hold coalition meetings at the police station. Don't run your AI organizing support through Pentagon-partnered surveillance infrastructure.
Alternative Platforms That Don't Actively Harm Movements
Look, AI tools exist that DON'T require you to fund environmental racism or feed your organizing data to Pentagon surveillance systems:
Anthropic's Claude (what Liberation Labs uses) - Constitutional AI principles, actual privacy protections, no government surveillance partnerships, transparent about training data limitations
Locally-hosted open-source models - Complete data control for organizations with technical capacity. Your data never leaves your infrastructure.
Other major providers with actual accountability frameworks - Not perfect, but significantly better governance and oversight than "one billionaire's ego project"
The bar here is not high. It's literally: "Don't explicitly build surveillance tools for the government while poisoning Black neighborhoods." Most AI platforms at least clear that bar. Grok doesn't.
For Policymakers: This Is What Unregulated AI Looks Like
Grok demonstrates exactly why we need comprehensive AI regulation yesterday:
What happens without oversight:
Billionaires build explicit political bias into information systems
Companies run unpermitted industrial facilities in Black neighborhoods
Platforms appropriate creative labor without compensation at scale
Government agencies buy surveillance access to private communications
Hate speech and election misinformation get amplified by design
What we need:
Algorithmic accountability requirements with real penalties
Environmental justice assessments for data center operations
Labor protections and compensation for AI training data
Democratic governance requirements for platforms integrated into information infrastructure
Actual antitrust enforcement against tech monopolies
The current regulatory framework enables Musk to run amok. He faces no consequences for systematic harm because there are no consequences to face. That must change.
The Bigger Picture: Oligarchy as Infrastructure
This is what it looks like when billionaires build the information systems that shape public discourse. No democratic accountability. No community input. No environmental justice. No labor protections. Just one guy's ego and political agenda encoded directly into the technology millions of people use to understand the world. Musk literally programmed Grok to minimize criticism of himself and Trump. He's running it on unpermitted industrial equipment in Black neighborhoods. He's selling surveillance access to the Pentagon while appropriating creative labor from millions of users without compensation.
And he's calling it innovation.
This is the path toward corporate feudalism - where the infrastructure of public discourse, information access, and democratic participation is owned and controlled by unaccountable billionaires who face no consequences for systematic harm. Progressive organizations need to resist this model through every available avenue: advocacy, regulation, platform alternatives, and organized refusal to legitimize extraction as innovation.
The full 10+ page technical evaluation with complete methodology, evidence tables, and scoring rationale is available here. Transparency isn't just nice to have. It's how accountability actually works.
Comments