Mapping the Wicked Structure of AI Governance

A multi-agent system that dynamically maps ethical, legal, technical, political, economic, and planetary dimensions of AI challenges — revealing complexity instead of simplifying it.

29 Factors
11 Stakeholders
171 Events
14 Dimensions

Knowledge Graph · 211 Nodes · 443 Edges


AI problems don't have solutions. They have structures.

Rittel & Webber (1973) defined wicked problems as challenges that resist technical solutions — where every attempt to fix one dimension creates new tensions in others. AI governance is a paradigmatic case.

No Stopping Rule

"Safe enough" is contested by every stakeholder. There is no point at which the problem is solved — only moments where the tension shifts.

Every Solution Changes the Problem

The EU AI Act was intended to reduce risk. It simultaneously created compliance burdens, regulatory-capture dynamics, and geopolitical asymmetries.

No Right to Be Wrong

Unlike a scientific hypothesis, an AI governance decision cannot simply be tested and discarded. Deployed systems create dependencies; withdrawn regulations leave gaps.

Conflicting Explanations

The same AI development is simultaneously a technical breakthrough, an economic disruption, a security threat, and an epistemic transformation.

No Definitive Formulation

There is no agreed-upon way to even state the problem. "AI safety" means something different to a developer, a regulator, a worker, and a person in the Global South.

Every Problem Is a Symptom

Algorithmic discrimination is a symptom of data inequality, which is a symptom of market concentration, which is a symptom of regulatory capture. The graph has no root node.

Highest-Ranked Factors

Scores reflect structural complexity, not urgency. A high score means the factor involves more stakeholders, deeper uncertainty, and stronger feedback loops.
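As a minimal sketch, a structural-complexity score of this kind could be composed from the three ingredients named above. The field names, weights, and example values here are illustrative assumptions, not the Observatory's actual scoring method:

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    stakeholders: int      # number of stakeholder groups involved
    uncertainty: float     # 0.0 (well-characterized) to 1.0 (deeply contested)
    feedback_loops: int    # reinforcing/balancing loops touching the factor

def structural_score(f: Factor) -> float:
    """Illustrative composite: more stakeholders, deeper uncertainty,
    and more feedback loops all push the score higher."""
    return round(f.stakeholders * (1 + f.uncertainty) + 2 * f.feedback_loops, 2)

example = Factor("algorithmic discrimination",
                 stakeholders=6, uncertainty=0.8, feedback_loops=3)
score = structural_score(example)  # higher = more structurally entangled
```

Note that such a score orders factors by entanglement, not by harm or urgency, which is exactly the distinction the ranking above draws.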

Top 10 factors from the current monitoring cycle, April 2026.

An Instrument for Understanding

The Wicked AI Observatory (WAO) is not a decision system — it is a comprehension system. It uses a knowledge graph of factors, stakeholders, and real-world events to reveal the evolving structure of AI governance challenges. Seven specialized AI agents continuously monitor, classify, and analyze developments across 14 dimensions.
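The factor–stakeholder–event graph described above can be sketched as a simple typed adjacency structure. The node types, relation labels, and example entries below are assumptions for illustration, not the system's actual schema:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Typed graph of factors, stakeholders, and events linked by labeled edges."""

    def __init__(self):
        self.nodes = {}                 # node_id -> {"type": ..., "label": ...}
        self.edges = defaultdict(list)  # node_id -> [(relation, target_id)]

    def add_node(self, node_id, node_type, label):
        self.nodes[node_id] = {"type": node_type, "label": label}

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node_id, relation=None):
        """Targets reachable from node_id, optionally filtered by relation."""
        return [dst for rel, dst in self.edges[node_id]
                if relation is None or rel == relation]

g = KnowledgeGraph()
g.add_node("f1", "factor", "market concentration")
g.add_node("s1", "stakeholder", "regulators")
g.add_node("e1", "event", "EU AI Act enters into force")
g.add_edge("e1", "affects", "f1")    # events perturb factors
g.add_edge("s1", "contests", "f1")   # stakeholders contest factors
```

Keeping edges labeled rather than untyped is what lets the agents ask structural questions, such as which factors a given event touches, rather than merely counting connections.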

Every analytical output carries provenance metadata, confidence scores, and an explicit distinction between machine-generated and human-confirmed assessments. Uncertainty is first-class content, not a disclaimer. Where the system lacks data, it signals silence rather than inventing confidence.
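A data-level sketch of what treating uncertainty as first-class content could look like: every assessment carries provenance, and a missing confidence value is an explicit abstention rather than a guess. The field names and rendering format are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    claim: str
    confidence: Optional[float]  # None signals "insufficient data", not zero
    source: str                  # provenance: agent identifier or document
    human_confirmed: bool        # machine-generated vs. human-confirmed

def render(a: Assessment) -> str:
    """Surface uncertainty explicitly; abstain instead of inventing confidence."""
    if a.confidence is None:
        return f"[no assessment] {a.claim} (insufficient data)"
    status = "human-confirmed" if a.human_confirmed else "machine-generated"
    return f"{a.claim} (confidence {a.confidence:.2f}, {status}, source: {a.source})"
```

The design choice worth noting is the abstain path: a `None` confidence produces a visible "no assessment" output instead of a fabricated number, which is the behavior the paragraph above describes as signaling silence.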

The target audience is the informed public: journalists, students, researchers, and policymakers who need to understand AI challenges at a depth that existing instruments — policy briefs, regulatory impact assessments, expert opinions — cannot provide.

WAO is a research prototype, developed as part of ongoing research in AI governance and complexity science. The system is open to collaboration from researchers whose disciplinary or geographic perspectives can strengthen its analytical scope.

Collaboration Welcome

We invite engagement from researchers in AI governance, complexity science, knowledge engineering, and science & technology studies.