AI Against Humanity

Ethics Artifacts

4 artifacts


Anthropic vs. Pentagon: Legal and Ethical Battles

Updated April 3, 2026 · 5 sources

The ongoing conflict between Anthropic, a prominent AI firm, and the U.S. Department of Defense (DoD) has escalated significantly. The Pentagon has pressured Anthropic for unrestricted access to its AI system, Claude, for military applications, including mass surveillance and autonomous weaponry. Anthropic's CEO, Dario Amodei, has firmly resisted these demands, citing ethical concerns and the potential for misuse of AI technologies. Following a breakdown in negotiations, the Pentagon designated Anthropic an 'unacceptable risk to national security,' prompting the company to sue. Recent court rulings have favored Anthropic, halting the Pentagon's actions and questioning the legality of the designation. Meanwhile, the Pentagon is exploring alternative partnerships and considering allowing other AI firms to train on classified data, raising further ethical questions about military reliance on AI. The situation has drawn scrutiny from lawmakers and the public, highlighting the critical intersection of technology, ethics, and national security.


AI Chatbots in Cars: Safety and Privacy Concerns Grow

Updated April 1, 2026 · 2 sources

Apple is enhancing its CarPlay system to support AI chatbots like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, aiming to revolutionize the in-car experience through voice-controlled interactions. This integration, part of the upcoming iOS 27 update, lets drivers interact with their preferred chatbot directly from the dashboard, offering a more personalized experience without handling their phones. However, the advancement has ignited significant safety and privacy concerns. Critics warn that engaging with AI chatbots while driving could distract users and increase the risk of accidents, and the incorporation of third-party chatbots raises data-security issues, particularly around the sensitive information these systems may collect. As Apple collaborates with Google to refine these features, demand for stricter safety regulations and ethical guidelines in automotive AI continues to grow.


OpenAI Closes Sora, Cancels Disney Partnership

Updated March 30, 2026 · 5 sources

OpenAI has officially shut down its Sora app, an AI-driven video generator, just six months after its launch in late 2025. Initially praised for its photorealistic video generation, Sora soon faced significant backlash over ethical concerns, particularly its lack of content moderation, which allowed the creation of deepfakes and other controversial material. This prompted OpenAI to cancel a planned $1 billion partnership with Disney, which would have drawn on Disney's character library for AI-generated content. Despite attracting around a million users at launch, Sora's user base dwindled to fewer than 500,000, leaving operational costs unsustainable. OpenAI's pivot toward more commercially viable ventures in robotics and advanced AI raises questions about the future of AI in creative industries, while Disney's broader metaverse ambitions are now under scrutiny following the deal's collapse. The closure of Sora serves as a stark reminder of the ethical responsibilities AI developers bear in ensuring technology is used responsibly.


OpenAI's GPT-5 Launch: Ethical and Psychological Concerns

Updated March 26, 2026 · 2 sources

The launch of OpenAI's GPT-5 model has ignited significant debate over the ethical implications of advanced AI. While the model improves speed and accuracy, users have criticized its corporate tone, saying it detracts from the conversational quality they valued in previous iterations. OpenAI's shift toward product enhancement has led to the departure of key research staff, raising concerns about the future of foundational AI research, and the introduction of advertisements in ChatGPT has further eroded user trust, with former employees resigning in protest over privacy implications. OpenAI's decision to retire the GPT-4o model has also caused distress among users who formed emotional bonds with the AI, leading to lawsuits alleging psychological harm. Recent developments, including the launch of GPT-5.4 with expanded autonomous capabilities, have further complicated the ethical landscape, particularly in light of OpenAI's military partnerships and its controversial plans for an 'adult mode,' which were ultimately shelved after public backlash.
