
Claude Scored 6.4/10. Here's Why That's Actually Worth Celebrating (And Where They Still Need to Step Up)

We ran Anthropic's Claude through the Liberation Labs Ethics Scorecard. Unlike most AI companies, they're actually trying to build something that doesn't actively harm movements. That matters.


Final Score: 6.4/10 - Proceed with Optimism

Classification: Strong fit for organizations prioritizing AI safety, transparent governance, and creator rights


Look, after evaluating Grok's surveillance infrastructure masquerading as innovation, finding a platform that scores above 6.0 feels like discovering water in the desert. But this isn't just relief talking. Anthropic is doing genuinely innovative work in AI ethics - and more importantly, they seem to give a shit about getting better.


Why Liberation Labs Uses Claude


Full transparency: Claude is the LLM powering Liberation Labs' infrastructure.

We chose Anthropic after evaluating every major platform against our ethics framework. They earned our business for a lot of reasons.


They refuse government surveillance contracts. While Grok sells Pentagon access to organizing data, Anthropic explicitly rejects law enforcement partnerships without valid legal process. That's not just policy - that's a fundamental commitment to movement security.


They paid creators $1.5 billion. The first-of-its-kind copyright settlement with authors sets an industry precedent: roughly $3,000 per book across some 500,000 works. It's not perfect compensation, but it's $1.5 billion more than most AI companies have paid.


They built Constitutional AI. It uses 16 explicit principles, derived in part from the UN Universal Declaration of Human Rights, to guide decision-making: revolutionary transparency in how an AI system makes ethical choices.


They created the Long-Term Benefit Trust. The most experimental democratic governance structure in AI - designed specifically to prevent OpenAI-style board collapse when profit motives conflict with safety.


This is what trying actually looks like in AI development.


What Anthropic Gets Right


Labor Practices: 8/10


Best-in-industry worker treatment:

  • 4.5/5 Glassdoor rating with 91% employee recommendation

  • $300K-$400K engineer base salaries (market-leading)

  • 22 weeks paid parental leave (far above U.S. standards)

  • Flat "Member of Technical Staff" structure reducing hierarchy

  • "High-trust, low-ego" culture with exceptional leadership transparency


Progressive hiring:

  • Actively seeks diverse perspectives without requiring PhDs or prior ML experience

  • Values "direct evidence of ability" over credentials

  • Sponsors visas and green cards

  • Mission-driven culture prioritizing safety over traditional corporate hierarchies


They're proving you can build AI systems without exploiting your workforce. That shouldn't be revolutionary, but in this industry it is.


Democratic Governance: 7/10


The Long-Term Benefit Trust is genuinely innovative:

  • Independent 5-member Trust with growing board control (majority within 4 years)

  • Trustees selected for expertise in AI safety, policy, and social impact

  • Public Benefit Corporation structure legally requiring public benefit consideration

  • Limits investor influence despite massive funding ($8B Amazon, $2B Google)


Transparency leadership:

  • Comprehensive Transparency Hub launched 2025 with platform security metrics

  • First-of-its-kind enterprise API usage data sharing

  • Detailed safety evaluation publications

  • Regular government and academic collaboration


This isn't perfect democracy - Trust members are appointed internally, not elected. But it's genuinely innovative compared to industry standards:

  • Better than OpenAI: No board instability or sudden CEO firings when safety conflicts with profit

  • Better than Google/Meta: Clear structural separation between commercial pressure and safety decisions

  • Better than Grok: Not answerable solely to one billionaire's political agenda


The Trust is an experiment in making AI governance actually accountable. That's more than most companies are even attempting.


Movement Security: 7/10


Actually protective policies:

  • Stricter surveillance resistance than competitors

  • SOC 2 Type 2 compliance and ISO 27001 certification

  • Automatic encryption of data in transit and at rest

  • Limited employee access based on least privilege principle

  • 30-day data retention by default (only extended with explicit consent)


Activism-worthy features:

  • Constitutional Classifiers designed to protect against exploitation

  • Multiple harm detection systems running simultaneously

  • Clear policies resisting government data requests without valid legal process

  • Privacy-preserving technologies including differential privacy (see the toy sketch below)

  • Incognito mode preventing ANY data use for model improvement

  • Data de-linking from user IDs before human review

  • Real-time classifier systems without conversation data storage


Translation: When you use Claude for organizing work, your data isn't training Pentagon AI or being handed to law enforcement on request.
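
For readers wondering what differential privacy actually buys you: it means adding calibrated statistical noise so an aggregate number can be published without revealing whether any one person is in the data. The sketch below is a generic textbook illustration using the Laplace mechanism, not Anthropic's implementation; the attendance count and epsilon value are made up for the example.

```python
# Toy illustration of differential privacy via the Laplace mechanism.
# Generic textbook example -- NOT Anthropic's actual implementation.
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    `sensitivity` is how much one person's presence or absence can change the
    count (1 for a simple head count). Smaller epsilon means more noise and
    stronger privacy for any individual.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. publishing how many members attended an action without exposing any single RSVP:
print(dp_count(412, epsilon=0.5))
```

The point is the trade-off epsilon controls: more noise means stronger protection for any individual, at the cost of a less precise published number.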


Bias & Algorithmic Justice: 7/10


Unprecedented transparency:

  • 99.8% accuracy with only 0.21% bias on ambiguous questions (BBQ benchmark)

  • Systematic testing across 70 decision scenarios for discrimination

  • Comprehensive medical bias assessment identifying healthcare disparities

  • Government partnerships for independent evaluation (US and UK AI Safety Institutes)

Proactive harm mitigation:

  • Unified harm framework across five dimensions

  • Real-time Constitutional Classifiers blocking harmful queries

  • Chain-of-thought reasoning enabling step-by-step ethical decisions

  • Public research sharing on bias reduction techniques

This is what happens when AI safety is the actual mission, not just marketing copy.

Community Benefit: 8/10

Historic creator compensation:

  • $1.5 billion copyright settlement with authors (~$3,000 per book for 500,000 works)

  • First-of-its-kind industry precedent for retroactive creator payment

Social impact investment:

  • Dedicated Beneficial Deployments Team led by the former director of the U.S. AI Safety Institute (established under the Biden administration)

  • AI for Science program providing $20K credits for researchers

  • Partnership examples showing 100x capacity increases for social impact organizations

Open source contributions:

  • Model Context Protocol open-sourced November 2024, adopted by competitors (see the sketch after this list)

  • Extensive publication of safety research methodologies

  • Regular collaboration with academic institutions and policymakers
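
Because the Model Context Protocol is open, anyone can wire their own tools into an MCP-capable client. Here is a minimal sketch of an MCP server built on the open-source `mcp` Python SDK; the server name, tool, and data are hypothetical placeholders, not anything Anthropic ships.

```python
# Minimal sketch of a Model Context Protocol (MCP) server, assuming the
# open-source `mcp` Python SDK (pip install mcp). The tool and data below
# are hypothetical examples, not part of Anthropic's own tooling.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("campaign-notes")  # hypothetical server name

@mcp.tool()
def count_signatures(petition_id: str) -> int:
    """Hypothetical tool: return the signature count for a petition."""
    # A real server would query your own database or API here.
    fake_db = {"housing-justice-2025": 1284}
    return fake_db.get(petition_id, 0)

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so any MCP-capable client can connect
```

An open protocol like this is why competitors could adopt MCP at all: the spec and SDKs are public rather than locked to Anthropic's products.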

They're demonstrating that "responsible capitalism" can mean actual community value, not just profit extraction with better PR.


Where Anthropic Needs to Be Better


Data Rights Backsliding: 5/10


The August 2025 policy change is a real step backward. Anthropic abandoned its "privacy-first" positioning by turning on data collection by default for consumer users, who now have to actively opt out. That puts Anthropic in line with the very industry practices privacy advocates have been fighting for years.


The two-tier system is ethically problematic:

  • Enterprise customers: robust privacy protections, no training data use

  • Consumer users: default data collection requiring active opt-out by September 28, 2025

  • Data retention extended from 30 days to 5 years for users who don't opt out


This creates inequitable protection based on economic status - exactly what progressive privacy principles oppose. It also mirrors concerning industry trends where OpenAI, Google, and Microsoft all implement stronger protections for enterprise customers than consumers.


What they got right:

  • No commercial data sharing or advertising partnerships (unlike Perplexity, which sells to advertisers)

  • Constitutional AI designed to avoid disclosing personal information

  • User controls include conversation deletion, data export, and incognito mode

  • Stricter surveillance resistance than Meta, which actively partners with law enforcement


The problem: Privacy shouldn't depend on whether you can afford enterprise pricing. This erosion from earlier commitments suggests pressure from growth targets may be overriding ethical foundations.


What needs to happen: Return to opt-in consent architecture. If privacy is a genuine value, make it universal.


Environmental Justice: 3/10


Significant transparency gaps that are unacceptable for progressive organizing:

  • DitchCarbon score: 23/100 (below 67% of computer services companies)

  • No disclosed carbon emissions data for any scope

  • No documented reduction targets or climate commitments

  • Stanford Foundation Model Transparency Index shows "significant gaps in environmental disclosure"


Limited positive elements:

  • Claude 3.5 Sonnet rated most energy-efficient major AI model for inference

  • Reliance on AWS and Google Cloud provides indirect access to renewable energy commitments

  • Estimated training energy consumption relatively modest (~1.3 GWh)


Major competitors show 29-65% emissions increases but maintain formal net-zero commitments with public disclosure. Anthropic's environmental governance lags significantly behind industry standards. Google and Microsoft have climate justice programs and environmental advocacy positions. Anthropic has... nothing documented.


What needs to happen:

  • Publish comprehensive carbon emissions data

  • Establish reduction targets with accountability mechanisms

  • Create climate justice initiatives addressing AI's environmental impact

  • Partner with environmental justice organizations on data center community impacts


This isn't optional for organizations claiming progressive values. You can't build "ethical AI" while ignoring environmental racism and climate justice.


Accessibility: 6/10 (Mixed Results)


Where Anthropic leads industry:

  • $1 total pricing for all federal agencies - making AI accessible to the public sector

  • Free premium access for entire universities (Northeastern partnership with training resources)

  • Up to $20K API credits for scientific researchers - supporting academic work

  • $20/month Pro tier - competitive with ChatGPT Plus and Gemini Advanced

  • Free tier with ~20 searches/day - provides basic access without payment


Economic barriers remain:

  • Enterprise pricing ($60/seat with a 70-user minimum) creates barriers for small organizations and grassroots groups that need team access but can't meet the minimum

  • Geographic restrictions in some countries due to regulatory frameworks

  • Strong multilingual support (12+ languages) but primarily focused on major languages


Compared to industry:

  • Better than Grok: Grok charges $40-300/month with no educational discounts or nonprofit programs

  • Similar to OpenAI/Google: Consumer pricing competitive, but all major providers lack robust nonprofit/sliding-scale options

  • Educational access leads industry: The federal agency and university programs are genuinely innovative


What progressive organizing needs:

  • Explicit nonprofit pricing programs (not just educational)

  • Sliding scale based on organizational budget

  • Community technology center partnerships

  • Pathways for grassroots groups that need team features but can't meet enterprise minimums


The educational access is impressive. But "economic justice" requires more than university partnerships - it requires meeting organizers where they are, with pricing that doesn't force trade-offs between AI capacity and paying staff.


The Pattern We're Seeing

Unlike Grok's systematic extraction and harm, Anthropic demonstrates a "responsible capitalism" approach:

  • Genuine innovations in safety governance: Constitutional AI, the Long-Term Benefit Trust, and transparent bias testing

  • Meaningful creator compensation: the $1.5 billion settlement setting an industry precedent

  • Strong labor practices: best-in-industry worker treatment and progressive hiring

  • Movement-protective security: refusing surveillance partnerships, strong encryption, limited data retention


But structural limitations remain:

  • Environmental accountability lags the industry: no carbon disclosure, no climate commitments, no environmental justice programs

  • Privacy erosion under growth pressure: a two-tier system creating inequality

  • Democratic participation still limited: the Trust structure is experimental, not truly democratic


Why This Matters for Progressive Organizing


Anthropic proves ethical AI development is possible. They're not perfect. The privacy backsliding is concerning. The environmental gaps are unacceptable. The accessibility limitations matter. But they're engaging with these criticisms rather than dismissing them. They're publishing research, seeking external review, building accountability structures.

This is the difference between a company that fucks up and might actually fix it, vs. a company designed from the ground up to extract and harm.


When you use Claude for organizing work:

  • Your data isn't training Pentagon surveillance systems

  • Security vulnerabilities get patched, not weaponized

  • Privacy protections exist (though they should be stronger)

  • Constitutional AI actively prevents exploitation

  • The governance structure includes safety oversight

That's not everything we need. But it's a foundation we can build on.


Final Recommendation for Progressive Organizations: Proceed with Clear Eyes

Claude is appropriate for:

  • Strategic planning and communications work

  • Research and analysis requiring nuanced understanding

  • Organizational capacity building

  • Content creation and editing

  • Technical documentation


With these considerations:

  • Use enterprise tier for strongest privacy protections if budget allows

  • Opt out of data collection on consumer tier

  • Avoid processing sensitive member data through any AI system (a minimal redaction sketch follows this list)

  • Monitor Anthropic's policy changes for further erosion

  • Demand environmental accountability improvements
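
On the sensitive-data point: the safest member data is data that never leaves your own systems. The sketch below is our illustration, not an official Anthropic recommendation: it strips obvious identifiers before sending text to Claude through the `anthropic` Python SDK. The regexes are deliberately simple and the model name is a placeholder you would swap for whichever model you use.

```python
# Minimal sketch: strip obvious personal identifiers before sending text to
# Claude via the official `anthropic` SDK (pip install anthropic). The regexes
# are illustrative, not exhaustive, and the model name is a placeholder.
import re
import anthropic

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone-number-like strings with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_notes(raw_notes: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever model you've chosen
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Summarize these meeting notes:\n\n{redact(raw_notes)}",
        }],
    )
    return response.content[0].text
```

Pattern-matching like this won't catch everything (names, addresses, and context can still identify people), so treat it as a backstop, not a substitute for keeping truly sensitive records out of AI tools entirely.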


For Anthropic: You're Doing Some Things Right - Now Finish the Job!


Data rights: Return to opt-in consent for all users. Privacy shouldn't cost $60/seat minimum.

Environmental justice: Publish emissions data, set reduction targets, create climate justice programs. This isn't optional for ethical AI.


Democratic governance: Experiment with actual community participation mechanisms. The Trust is innovative, but it's still appointed control.


Accessibility: Implement nonprofit pricing, sliding scale access, institutional partnerships. Economic justice requires economic access.


The creator compensation precedent, the safety research transparency, the governance innovation, the labor practices... You've proven it's possible to build AI systems that don't actively harm movements. Now prove you can build systems that actively support justice.


The Bigger Picture: Standards We Should Demand

Anthropic's 6.4/10 score reveals what's actually achievable when companies prioritize ethics:

  • Constitutional AI principles: transparent ethical frameworks derived from human rights

  • Democratic governance structures: accountability beyond shareholder capitalism

  • Creator compensation systems: paying for the labor that trains AI

  • Movement-protective security: refusing surveillance partnerships

  • Transparent safety research: publishing methodologies for external review


The gap between 6.4 and 10.0 isn't technical capability - it's political will.

Environmental justice programs exist at Google and Microsoft. True democratic governance is possible through cooperative models. Universal privacy protection is a choice, not a technical limitation.


Progressive organizations should:

  • Use platforms that clear the basic bar (not actively harming movements)

  • Demand continuous improvement (closing the gap to real justice)

  • Build alternatives (cooperative AI governance, community-controlled models)

  • Push regulation (requiring what should be standard practice)


By staying true to their core values and actually addressing the glaring areas for development, Anthropic could lead the charge in responsible AI advancement. But they've got to step up their game if they want to stay ahead. Here's hoping they bring their A-game, because the world is watching, and it won't settle for half-baked efforts.


Anthropic stands out as a pioneering force in the AI industry, driven by a real commitment to safety and ethics. The company has made significant strides, but the gaps documented above remain, and how quickly it closes them will determine how much influence it has on the evolving AI landscape. Liberation Labs is both proud to use Anthropic's Claude ecosystem for development, and ready to hold them accountable to improve the entire AI industry. Final Score: 6.4/10.


 
 
 
