
Liberation Labs AI Ethics Scorecard

Updated: Oct 8

Stop Using AI Tools That Sell You Out!

Our Ethics Scorecard: Because "move fast and break things" shouldn't mean breaking your movement.

Your organizing platform just updated its privacy policy. You know, that 47-page document nobody reads? Buried in paragraph 32: they're now sharing "anonymized" organizing data with law enforcement "upon lawful request." By the time you notice, six months of community contacts, meeting attendance patterns, and activist networks are already in a Palantir database. This AI Tools Ethics Scorecard is designed to catch that before you ever sign up.

The Problem: Silicon Valley Doesn't Give a Shit About Your Movement

Let's be clear about something: the tech companies building AI tools are not your allies.

With very few exceptions, they're venture-capital funded, growth-obsessed, and their entire business model depends on extracting value from communities. Your organizing data is their product. Your community relationships are their training data. Your activist networks are their surveillance partnerships waiting to happen.

This isn't paranoia. It's pattern recognition:

  • Surveillance capitalism is real - Your "free" organizing tool sells community data to the highest bidder

  • Algorithmic bias is structural - Systems trained on oppressive data reproduce oppressive outcomes

  • Labor exploitation is foundational - That AI was built by content moderators sorting through stolen material for $2/hour with no healthcare

  • Security theater is standard - "Military-grade encryption" that cooperates with every government data request

  • Accessibility is an afterthought - English-only, expensive, and completely unusable for anyone with different abilities

The progressive movement can't be trapped between adopting tools that actively undermine our values and abandoning these incredibly powerful tools to our opponents. We need a framework for saying "No, this hurts the work."

The Solution: A Scorecard That Actually Matters

The Liberation Labs AI Tools Ethics Scorecard evaluates technologies across 8 weighted categories that reflect what actually matters for organizing work. We're not measuring "innovation" or "disruption." We're measuring whether this tool will get your people surveilled, exploit workers, exclude marginalized communities, or extract value while giving nothing back.

The 8 Categories (And Why They Matter)

1. Data Rights & Privacy Protection (20%)

Can the cops get your organizing data?

We measure how well tools protect communities from surveillance, corporate extraction, and government monitoring. Because "we take privacy seriously" means nothing if they hand over your data with every subpoena.

10 points: Open source, local processing, no data retention, strong encryption

1 point: Extensive data harvesting, active surveillance partnerships, poor security

2. Labor Impact & Worker Justice (15%)

Who got exploited to build this?

AI doesn't emerge from the ether. Somebody trained it, somebody moderated its content, somebody built the infrastructure. We measure whether those workers were treated like humans or like disposable resources.

10 points: Augments human capacity, pays training data creators, strong labor practices

1 point: Explicitly designed to eliminate jobs, exploits unpaid labor

3. Democratic Governance & Accountability (15%)

Who controls this thing and can we fire them?

Tech companies love to talk about "community" while maintaining absolute control over their platforms. We measure whether affected communities have actual power over the tools shaping their organizing.

10 points: Community-controlled, transparent governance, democratic decision-making

1 point: Opaque corporate control, no accountability mechanisms

4. Accessibility & Economic Justice (15%)

Can grassroots groups actually use this?

A tool that only works for well-funded 501(c)(3)s isn't a tool for the movement. We measure whether technology is accessible to organizations with limited resources and diverse communities.

10 points: Free/open source, multiple languages, full accessibility features

1 point: Expensive, poor accessibility, English-only

5. Movement Security (10%)

Will this get activists arrested?

This is separate from general privacy because movement security has specific requirements. We measure whether tools actively protect organizers from state surveillance and retaliation.

10 points: Strong encryption, anonymous use possible, resists state pressure

1 point: Poor security, active surveillance partnerships

Special note: Tools scoring below 5.0 here should be avoided for any sensitive organizing work. Period.

6. Environmental Justice (10%)

Is this tool burning the planet to generate text?

AI has massive environmental impact through compute requirements and energy consumption. Climate justice organizations shouldn't be using tools that contradict their values.

10 points: Renewable energy, minimal compute requirements, climate justice leadership

1 point: High environmental impact, poor climate practices

7. Bias & Algorithmic Justice (10%)

Does this reproduce oppressive systems?

Algorithmic bias isn't a bug - it's a feature of systems trained on data from oppressive societies. We measure whether tools actively work against bias or just reproduce it at scale.

10 points: Actively anti-oppressive, diverse training data, bias mitigation features

1 point: Perpetuates harmful biases, no mitigation efforts

8. Community Benefit vs. Extraction (5%)

Does this empower communities or extract from them?

The ultimate question: does this tool genuinely serve the movement, or does it treat organizing communities as resources to be mined?

10 points: Explicitly designed for community empowerment, gives back value

1 point: Pure extraction, communities used as product

How to Actually Use This

When you're considering a new tool, or need to evaluate one you've already adopted, follow the link to the complete scorecard page. The scorecard on its own works as a research rubric if you want to score your tools manually. We also crafted it as an engineered prompt: select all the text, copy/paste it into any LLM with a deep research function (we recommend Claude's research tool, both for utility and ethics), and add an instruction to evaluate the tool you're scoring, all in a single prompt. As always, take the time to verify sources and confirm facts.

Each category gets scored 1-10, then weighted for a final score out of 10.0:

  • 9.0-10.0: Exemplary - Actively advances social justice organizing

  • 7.0-8.9: Good - Generally aligned with movement values

  • 5.0-6.9: Acceptable - Usable but watch for specific concerns

  • 3.0-4.9: Problematic - Significant ethical issues that require mitigation

  • 1.0-2.9: Avoid - Actively harmful to organizing goals

Critical thresholds:

  • Tools scoring below 5.0 in Movement Security? Don't use them for sensitive work.

  • Tools scoring below 4.0 in ANY category? You need explicit risk assessment before adoption.

  • Multiple tools in your stack with concerning scores? Consider cumulative surveillance and exploitation risk.
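The weighting and threshold logic above can be sketched in a few lines of code. This is a minimal illustration, not part of the official scorecard tooling; the category names and weights come from the framework, but the function names and any example scores are our own.

```python
# Sketch of the scorecard's weighted scoring and critical thresholds.
# Weights mirror the 8 categories above and sum to 1.0.

WEIGHTS = {
    "Data Rights & Privacy Protection": 0.20,
    "Labor Impact & Worker Justice": 0.15,
    "Democratic Governance & Accountability": 0.15,
    "Accessibility & Economic Justice": 0.15,
    "Movement Security": 0.10,
    "Environmental Justice": 0.10,
    "Bias & Algorithmic Justice": 0.10,
    "Community Benefit vs. Extraction": 0.05,
}


def weighted_score(scores):
    """Combine 1-10 category scores into a final score out of 10.0."""
    return round(sum(scores[cat] * w for cat, w in WEIGHTS.items()), 2)


def critical_flags(scores):
    """Apply the critical thresholds from the scorecard."""
    warnings = []
    if scores["Movement Security"] < 5.0:
        warnings.append("Do not use for sensitive organizing work")
    for category, score in scores.items():
        if score < 4.0:
            warnings.append(f"Explicit risk assessment needed: {category}")
    return warnings
```

A tool scoring a hypothetical 6 in every category except a 3 in Movement Security would land at 5.7 overall ("Acceptable") yet still trigger both critical flags, which is exactly why the thresholds exist alongside the headline number.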

We've Already Done Some of the Work

Liberation Labs has evaluated several major AI platforms using this framework:

  • Claude - 6.4/10 The highest score we've found as of this writing.

  • Grok - 1.6/10 Exactly the nightmare garbage pile we're trying to steer society away from

  • OpenAI - 2.25/10 There are still a few things they do right, but their recent steps toward profit over people aren't encouraging for the most well-known frontier LLM

Every assessment includes our scoring rationale and evidence sources so you can validate our methodology or disagree with specific calls. This isn't a black box evaluation. It's transparent, community-verifiable, and built to be argued with.

What This Means for Your Organization

Before adopting any new AI tool:

  1. Run it through the scorecard - Score each category based on available evidence

  2. Set organizational thresholds - Decide your minimum acceptable scores (we recommend 5.0+ overall, 5.0+ in Movement Security)

  3. Document your reasoning - Keep records of why you scored things the way you did

  4. Reassess regularly - Tools and companies change; your evaluations should too

For your existing tech stack:

  1. Audit what you're already using - Score your current tools

  2. Identify concerning patterns - Look for systemic issues across multiple platforms

  3. Develop mitigation strategies - Some tools are necessary despite concerns; plan accordingly

  4. Build replacement timelines - Start migrating away from harmful tools


For movement-wide accountability:

  1. Share your evaluations - Let other organizations benefit from your research

  2. Pressure vendors - Companies improve when faced with values-based evaluation

  3. Build shared standards - Use this scorecard and other expert-created tools to create movement-wide expectations

  4. Support better alternatives - Reward companies that actually give a shit about justice

Get the Complete Framework

[Download the Full Ethics Scorecard →]


The complete framework includes:

  • Detailed scoring criteria with specific benchmarks for each category

  • Complete methodology and weighting rationale

  • Implementation guidance for organizational adoption

  • Customization options for different organizing contexts (increase Labor weight for union work, increase Accessibility weight for community-serving orgs, etc.)

  • Example evaluations showing the framework in action

What's Next?


Liberation Labs isn't just about scorecards. We're building comprehensive infrastructure for AI-enhanced organizing:

  • [Grassroots Organizing System] - Plug the prompt-engineered system into any LLM to make it a prosocial organizing strategy assistant.

  • [Free Organizing Tools →] - Simple prompts for everyday campaign work (no signup, no surveillance)

  • [Workshop Series →] - Training on using AI without getting played by it

  • Advanced Frameworks - Sophisticated organizing systems for complex campaigns (subscription tiers with sliding scale)

  • Regular Tool Evaluations - We publish ongoing assessments of platforms organizers actually use

The Bottom Line


You don't have to choose between effective organizing and ethical technology.

But you do have to stop blindly adopting tools just because everyone else is using them, and stop writing off every company and tool as rotten without real research. The Ethics Scorecard gives you a rigorous framework for making values-aligned decisions about technology. Because the surveillance state, corporate extraction, and algorithmic bias aren't going to fix themselves. We have to build infrastructure that actively resists these systems. That work starts with knowing which tools help the movement and which ones hurt it.

Liberation Labs: Building resistance infrastructure through ethical AI. All scorecards and frameworks provided as community resources under Creative Commons licensing.

Questions? Want us to evaluate a specific tool? [Contact Liberation Labs →]

 
 
 

© 2025 by Liberation Labs. Powered and secured by Wix 
