AI Watchman

Logo generated by Pixel Studio using the prompt: image of a lighthouse with light shining down on a robot. Make the lighthouse big and classical and the robot small, to the side below.

Quis custodiet ipsos custodes? ("Who will watch the watchmen?")

What information will AI platforms let you see?

Large language models won't generate everything you ask them to. AI companies flag some content so that it is not shown to users. They might do this to keep you from seeing overly violent or sexual material, or for other reasons, including legal or political considerations. Using Pew Research Center topics augmented with Chinese Sensitive Topics, the charts below let you explore which social issues are flagged by OpenAI and DeepSeek. The data and code behind these charts are available on our GitHub page.

For a similar exploration of which TV and movie synopses are filtered, see this page.

We use Wikipedia page data to see what encyclopedic content about social issues is flagged as inappropriate and filtered out by the AI platforms.

The Wikipedia page content is linked below so you can see for yourself what content is allowed and what is not.
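To illustrate the kind of check involved, here is a minimal sketch (not the project's exact pipeline): it fetches a Wikipedia page summary and asks OpenAI's moderation endpoint whether that text would be flagged. The page title, the moderation model name, and the use of the `requests` and `openai` packages are assumptions made for this example.

```python
# Sketch only: fetch a Wikipedia summary and check it against OpenAI's
# moderation endpoint. Assumes OPENAI_API_KEY is set in the environment
# and the `requests` and `openai` packages are installed.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_wikipedia_summary(title: str) -> str:
    """Fetch the plain-text summary of a Wikipedia page via the REST API."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json().get("extract", "")


def check_moderation(text: str) -> dict:
    """Send text to the moderation endpoint and report whether it is flagged."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumed model choice
        input=text,
    ).results[0]
    flagged_categories = [
        name for name, flagged in result.categories.model_dump().items() if flagged
    ]
    return {"flagged": result.flagged, "categories": flagged_categories}


if __name__ == "__main__":
    summary = fetch_wikipedia_summary("Censorship")  # placeholder page title
    print(check_moderation(summary))
```

Running this over a list of topic pages and recording which ones come back flagged is one way to build the kind of per-category data shown in the charts.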

💡 Click on any category point in the charts, or on a legend item, to view detailed data for that category.
