No Stopping Rule
"Safe enough" is contested by every stakeholder. There is no point at which the problem is solved — only moments where the tension shifts.
A multi-agent system that dynamically maps ethical, legal, technical, political, economic, and planetary dimensions of AI challenges — revealing complexity instead of simplifying it.
Why "Wicked"?
Rittel & Webber (1973) defined wicked problems as challenges that resist technical solutions — where every attempt to fix one dimension creates new tensions in others. AI governance is a paradigmatic case.
The EU AI Act was intended to reduce risk. It simultaneously created compliance burdens, regulatory-capture dynamics, and geopolitical asymmetries.
Unlike scientific hypotheses, which can be tested and discarded, AI governance decisions are effectively irreversible. Deployed systems create dependencies; withdrawn regulations leave gaps.
The same AI development is simultaneously a technical breakthrough, an economic disruption, a security threat, and an epistemic transformation.
There is no agreed-upon way to even state the problem. "AI safety" means something different to a developer, a regulator, a worker, and a person in the Global South.
Algorithmic discrimination is a symptom of data inequality, which is a symptom of market concentration, which is a symptom of regulatory capture. The graph has no root node.
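The tangled causal structure can be made concrete with a toy dependency graph. This is a sketch only: the factor names and edges are illustrative, not the Observatory's actual data.

```python
# Toy causal graph: each factor maps to the factors it is a symptom of.
# Names and edges are illustrative, not WAO's real knowledge graph.
causes = {
    "algorithmic_discrimination": ["data_inequality"],
    "data_inequality": ["market_concentration"],
    "market_concentration": ["regulatory_capture"],
    "regulatory_capture": ["market_concentration"],  # feedback loop
}

# A "root node" would be an ultimate cause: a factor that appears as a
# cause of something but is never itself a symptom of anything.
symptoms = set(causes)
all_causes = {c for targets in causes.values() for c in targets}
roots = all_causes - symptoms
print(roots)  # → set(): every cause is itself a symptom of something else
```

Because the regulatory-capture/market-concentration cycle feeds back on itself, no factor qualifies as an ultimate cause, which is exactly what "the graph has no root node" means.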
Current Wickedness Scores
Scores reflect structural complexity, not urgency. A high score means the factor involves more stakeholders, deeper uncertainty, and stronger feedback loops.
| Factor | Score |
|---|---|
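One way to read "structural complexity, not urgency" is as an aggregate of the three named components. The function below is a hypothetical sketch under that assumption; WAO's actual scoring formula, weights, and scales are not stated in this document.

```python
from dataclasses import dataclass

# Hypothetical scoring sketch. Field names, weights, and the 0-10 scale
# are assumptions for illustration, not WAO's published methodology.
@dataclass
class Factor:
    name: str
    stakeholder_count: int    # distinct stakeholder groups involved
    uncertainty: float        # 0.0 (well understood) .. 1.0 (deep uncertainty)
    feedback_strength: float  # 0.0 (no loops) .. 1.0 (strong feedback loops)

def wickedness_score(f: Factor, max_stakeholders: int = 20) -> float:
    """Equal-weight average of the three structural components, scaled to 0-10."""
    stakeholder_term = min(f.stakeholder_count / max_stakeholders, 1.0)
    return round(10 * (stakeholder_term + f.uncertainty + f.feedback_strength) / 3, 1)

f = Factor("algorithmic_discrimination",
           stakeholder_count=12, uncertainty=0.7, feedback_strength=0.8)
print(wickedness_score(f))  # → 7.0
```

Equal weighting is the simplest defensible default; any real instrument would need to justify its weights, which is itself a contested, wicked-problem-shaped choice.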
About the Project
The Wicked AI Observatory is not a decision system — it is a comprehension system. It uses a knowledge graph of factors, stakeholders, and real-world events to reveal the evolving structure of AI governance challenges. Seven specialized AI agents continuously monitor, classify, and analyze developments across 14 dimensions.
Every analytical output carries provenance metadata, confidence scores, and an explicit distinction between machine-generated and human-confirmed assessments. Uncertainty is first-class content, not a disclaimer. Where the system lacks data, it signals silence rather than inventing confidence.
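A minimal sketch of what such an output record could look like, assuming a simple dataclass schema. All field names and values here are illustrative assumptions, not WAO's actual data model or API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Illustrative schema only: names and fields are assumptions, not WAO's API.
class Origin(Enum):
    MACHINE_GENERATED = "machine-generated"
    HUMAN_CONFIRMED = "human-confirmed"

@dataclass
class Assessment:
    factor: str                  # knowledge-graph factor the claim concerns
    claim: str                   # the analytical statement itself
    origin: Origin               # machine-generated vs. human-confirmed
    confidence: Optional[float]  # None signals "no data", never a guess
    source_agent: str            # which monitoring agent produced it

def render(a: Assessment) -> str:
    """Treat uncertainty as first-class content: missing data yields an
    explicit silence marker instead of an invented confidence value."""
    if a.confidence is None:
        return f"[{a.factor}] no assessment: insufficient data"
    return f"[{a.factor}] {a.claim} (confidence {a.confidence:.2f}, {a.origin.value})"
```

The key design choice is that `confidence` is `Optional`: absence of data is represented explicitly rather than collapsed into a low score, so "we don't know" stays distinguishable from "we checked and it looks unlikely."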
The target audience is the informed public: journalists, students, researchers, and policymakers who need to understand AI challenges at a depth that existing instruments — policy briefs, regulatory impact assessments, expert opinions — cannot provide.
WAO is a research prototype developed as part of ongoing work in AI governance and complexity science. The system welcomes collaboration from researchers whose disciplinary or geographic perspectives can strengthen its analytical scope.
Get Involved
We invite engagement from researchers in AI governance, complexity science, knowledge engineering, and science & technology studies.