About
Our Mission
We believe that AI is never a neutral tool; it reflects the intentions of those who build and deploy it.
Malicious use of AI is not an outlier; it happens constantly, and mostly in plain sight.
Our aim is to show the bigger picture: we collect and organise the evidence of AI used against humanity.
Because when you see it all together, it becomes impossible to ignore.
How We Started
AI Against Humanity was born from a moment of clarity that struck at the beginning of 2026.
In the USA, ICE partnered with Palantir Technologies to use AI to track and deport immigrants.
This isn't an edge case or dystopian fiction. It is happening now, with government approval, using sophisticated AI and machine learning tools.
It represents everything we fear about AI deployment: the targets were vulnerable people, and the methods were algorithmic.
Reality has truly surpassed what we could imagine.
AI is Not the Problem
We are not against artificial intelligence itself. AI has enormous potential for good: to solve problems, enhance human capability, and create opportunities we haven't yet imagined.
The problem is not the tool. The problem is human intention, and the power the tool places behind it.
At its core, an algorithm is just mathematics: a formula, a logical process, a system that parses data.
Facial recognition, in itself, is not inherently evil. Predictive algorithms, in themselves, are not inherently weapons.
Data analysis, in itself, is not inherently oppression.
However, context changes everything. When you use facial recognition to identify and arrest protesters, you've weaponised it.
When you use predictive algorithms to decide who gets policed, you've turned them into instruments of discrimination.
Why AI Against Humanity Exists
We collect and organise the evidence, because visibility matters.
When the evidence is scattered, it's easy to dismiss each case as an outlier.
But when you see all of the evidence together, it becomes impossible to ignore.