AI Against Humanity

Accountability Artifacts

3 artifacts


Anthropic Changes Claude Subscription Model

Updated April 5, 2026 · 2 sources

Anthropic has implemented a new policy affecting Claude subscribers, effective April 4, 2026. Subscribers can no longer apply their subscription limits to third-party tools such as OpenClaw, which has become popular for automating tasks like managing email and booking flights; access to OpenClaw now requires a separate pay-as-you-go billing option. Boris Cherny, head of Claude Code, said the change is intended to streamline service offerings and improve the user experience, but it has raised concerns about accessibility and the added cost for subscribers who rely on such tools for productivity.


Anthropic vs. Pentagon: Legal and Ethical Battles

Updated April 3, 2026 · 5 sources

The conflict between Anthropic, a prominent AI firm, and the U.S. Department of Defense (DoD) has escalated significantly. The Pentagon has pressured Anthropic for unrestricted access to its AI system, Claude, for military applications, including mass surveillance and autonomous weaponry. Anthropic's CEO, Dario Amodei, has firmly resisted these demands, citing ethical concerns and the potential for misuse of AI technologies. After negotiations broke down, the Pentagon designated Anthropic an 'unacceptable risk to national security,' prompting the company to sue. Recent court rulings have favored Anthropic, halting the Pentagon's actions and questioning the legality of the designation. Meanwhile, the Pentagon is exploring alternative partnerships and considering allowing other AI firms to train on classified data, raising further ethical questions about military reliance on AI. The dispute has drawn scrutiny from lawmakers and the public, highlighting the intersection of technology, ethics, and national security.


Wikipedia Bans AI-Generated Content

Updated March 27, 2026 · 2 sources

In March 2026, Wikipedia announced a ban on AI-generated articles, a decision driven by concerns over the integrity and reliability of content on the platform. The new policy, which applies to the English-language Wikipedia, prohibits editors from creating or rewriting articles with AI tools, though basic copy editing and translation via AI remain permitted. The move follows ongoing debates within the editing community about the misuse of AI technologies, particularly large language models (LLMs), which can distort meaning or introduce inaccuracies. The ban was approved by a significant majority of Wikipedia editors, reflecting a collective commitment to upholding the platform's core content policies and keeping information trustworthy and verifiable.
