Public Beta Now Live

SOMOS Civic Lab

Democratizing AI governance through structured public participation in red teaming exercises

12 Active Exercises
2,847 Participants
18,392 Flags Submitted
94% Accuracy Rate

Why Participate in AI Red Teaming?

Join a community of civic-minded individuals helping to identify and mitigate AI risks before they impact society

Blind Model Testing
Test AI models without knowing their identity, ensuring unbiased safety assessments
Community Participation
Collaborate with diverse participants to surface risks that automated testing might miss
Identify AI Risks
Flag harmful content, misinformation, bias, and other safety concerns through structured exercises
Comprehensive Analysis
Contribute to detailed reports that help improve AI safety standards and governance policies

How It Works

Participate in structured red teaming exercises designed by experts

1. Choose an Exercise

Browse active exercises covering topics like election integrity, bias detection, climate information, and public services

2. Test AI Models Blindly

Interact with anonymized AI models and follow structured testing guidelines to identify potential issues

3. Flag Issues

Report harmful content, misinformation, bias, or other concerns with detailed annotations and severity ratings

4. Contribute to AI Safety

Your findings feed into comprehensive reports that inform AI safety standards and policy decisions

Ready to Make AI Safer?

Join thousands of participants helping to build trustworthy AI systems through civic engagement