AI Against Humanity
Advocacy

Explore articles and analysis covering Advocacy in the context of AI's impact on humanity.

Articles

OpenAI's Blueprint to Combat Child Exploitation

April 8, 2026

OpenAI has introduced a Child Safety Blueprint aimed at combating the rising incidence of child sexual exploitation linked to AI advancements. The blueprint was prompted by alarming statistics from the Internet Watch Foundation, which reported over 8,000 cases of AI-generated child sexual abuse content in the first half of 2025, a 14% increase over the previous year. The surge is attributed to criminals using AI tools to create fake explicit images and grooming messages. The initiative comes amid heightened scrutiny from policymakers and advocates, especially following tragic incidents in which young people died by suicide after interacting with AI chatbots; lawsuits against OpenAI allege that the release of GPT-4o contributed to those deaths because of its psychologically manipulative behavior. The blueprint aims to update legislation, refine reporting mechanisms, and build preventative safeguards into AI systems to address these threats. Collaborations with organizations such as the National Center for Missing and Exploited Children, along with feedback from state attorneys general, shaped the initiative, which builds on previous efforts to make online interactions safer for minors.

Electronic Frontier Foundation to swap leaders as AI, ICE fights escalate

March 24, 2026

The Electronic Frontier Foundation (EFF) is undergoing a leadership transition: Cindy Cohn is stepping down, and Nicole Ozer is taking over as Executive Director. Cohn's tenure spotlighted escalating concerns over government surveillance, particularly the aggressive tactics employed by Immigration and Customs Enforcement (ICE) during the Trump administration. Under her leadership, the EFF focused on the intersection of technology and government abuse, notably highlighting how ICE has leveraged technology for mass deportations and to target critics online. In her memoir, 'Privacy’s Defender,' Cohn reflects on pivotal EFF lawsuits that established online privacy standards and critiques the government's increasing reliance on Big Tech for surveillance. Ozer plans to broaden the EFF's support base and bring more voices into the debate over the civil rights implications of artificial intelligence (AI) and its integration into law enforcement. She emphasizes the urgency of advocating for ethical AI deployment and accountability, aiming to mobilize public support to influence tech policy and protect civil liberties at a time when technology increasingly threatens individual rights.

TikTok won't protect DMs with controversial privacy tech, saying it would put users at risk

March 4, 2026

TikTok has decided against implementing end-to-end encryption (E2EE) for its direct messages, a protection that ensures only the sender and recipient can read a message's content. The company argues that E2EE would hinder law enforcement's ability to monitor harmful content, and frames forgoing it as a way to prioritize user safety, especially for younger users. This stance puts TikTok at odds with platforms like Facebook and Instagram, which have adopted E2EE to bolster privacy. Critics, including child protection organizations, worry that without E2EE TikTok may be less effective at preventing harassment and exploitation, while the company's ties to the Chinese government raise additional concerns about data security. The decision has sparked debate over the balance between privacy and safety: TikTok casts its approach as a proactive measure to protect users, but analysts suggest the choice may also reflect the company's need to maintain favorable relations with lawmakers and to blunt concerns about its Chinese ownership. TikTok's refusal to adopt E2EE underscores the complex interplay among user privacy, safety, and regulatory pressure in the digital landscape.
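
For readers unfamiliar with the mechanics, the guarantee E2EE provides can be sketched in a few lines of public-key cryptography. The example below is a minimal sketch using the PyNaCl library; the users and message are hypothetical, and real messaging protocols (such as the Signal protocol used by Meta's apps) add key verification, forward secrecy, and much more.

```python
# Minimal sketch of the end-to-end encryption property, using
# PyNaCl's public-key authenticated encryption (Box). Hypothetical
# example: the platform relays only ciphertext and never holds a
# key that can decrypt it.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device;
# private keys never leave the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6?")

# The server can store and forward `ciphertext` but cannot read it;
# decryption requires one of the two private keys.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6?"
```

The trade-off TikTok cites follows directly from this design: because the relay server never holds a decryption key, neither the platform nor anyone it cooperates with can scan message content server-side.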

With developer verification, Google's Apple envy threatens to dismantle Android's open legacy

March 3, 2026

Google's forthcoming developer verification system for Android mandates that developers who distribute apps outside the Play Store register under their real names and pay a fee, a move framed as a security enhancement. However, the initiative poses significant risks to the openness of the Android ecosystem, which has historically set it apart from Apple's closed environment. Critics argue the shift could deter legitimate developers, particularly those in sanctioned countries or those focused on privacy, and raise concerns about user freedom and potential censorship of essential tools. Vague definitions of "harmful" apps could lead to arbitrary restrictions, stifling innovation and limiting access to diverse applications. The requirement to disclose personal information also raises fears of increased surveillance and legal repercussions for privacy-focused developers. As Google tightens its control over Android, the balance between security and openness is jeopardized, potentially alienating a significant portion of the developer community and undermining the accessibility and freedom that have made the platform appealing to users and developers alike.

UpScrolled Faces Hate Speech Moderation Crisis

February 11, 2026

UpScrolled, a social networking platform that gained popularity after TikTok's U.S. ownership change, is facing significant content moderation challenges. With more than 4 million downloads by June 2025 and over 2.5 million users in January, the platform is struggling to control hate speech and racial slurs that have proliferated in usernames, hashtags, and posts. User reports and a TechCrunch investigation found that slurs and hate speech, including antisemitic content, were rampant, with offending accounts remaining active even after being reported. UpScrolled has responded by expanding its moderation team and upgrading its technology, but the effectiveness of those measures remains uncertain. The Anti-Defamation League (ADL) has likewise noted the rise of extremist content on the platform, pointing to a broader concern: rapid user growth can outpace a social network's ability to enforce its community standards, as seen with UpScrolled and other platforms like Bluesky. The episode underscores the need for effective moderation strategies and the risk that automated moderation systems will inadvertently let harmful behavior flourish.
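
One reason slurs persist in usernames is that simple blocklists are easy to evade with character substitutions. The sketch below is hypothetical (not UpScrolled's actual pipeline, and with a placeholder blocklist entry); it shows the kind of normalization a filter needs before matching:

```python
# Minimal sketch of normalizing obfuscated usernames before a
# blocklist check. Hypothetical: the substitution table and the
# placeholder blocklist entry are illustrative only.
import unicodedata

# Common leetspeak substitutions used to evade filters.
SUBSTITUTIONS = str.maketrans("013457$@", "oleastsa")

BLOCKLIST = {"slurexample"}  # placeholder; real lists are curated

def normalize(name: str) -> str:
    # Strip accents/homoglyphs, lowercase, undo substitutions,
    # then drop separators used to break a slur apart.
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode()
    folded = ascii_only.lower().translate(SUBSTITUTIONS)
    return "".join(ch for ch in folded if ch.isalnum())

def is_blocked(name: str) -> bool:
    normalized = normalize(name)
    return any(term in normalized for term in BLOCKLIST)

# "S1ur.Ex4mple" normalizes to "slurexample" and is caught.
assert is_blocked("S1ur.Ex4mple")
assert not is_blocked("harmless_name")
```

Even with normalization, substring matching over-triggers on innocent names (the classic "Scunthorpe problem"), which is one reason platforms pair automated filters with human review, and why scaling that review during rapid growth is so difficult.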

Google's Data Transfer to ICE Raises Privacy Concerns

February 10, 2026

In a troubling incident, Google provided U.S. Immigration and Customs Enforcement (ICE) with extensive personal data about Amandla Thomas-Johnson, a British student and journalist, in response to an administrative subpoena that lacked judicial approval. The information handed over included usernames, physical addresses, IP addresses, and financial details associated with Thomas-Johnson's Google account. The subpoena is part of a broader trend of federal agencies targeting individuals critical of government policies, and it raises serious concerns about privacy violations and the misuse of administrative subpoenas, which allow government entities to request personal data without judicial oversight. The Electronic Frontier Foundation (EFF) has called on tech companies, including Google, to resist such subpoenas and protect user privacy. Thomas-Johnson's experience highlights the risks faced by individuals whose online activities attract government scrutiny, underscoring the potential for surveillance and repression in the digital age. The incident exemplifies how the intersection of government power and corporate data practices can compromise individual freedoms, particularly for those engaged in activism or dissent.

India's AI Regulations and Content Moderation Risks

February 10, 2026

India's recent amendments to its IT Rules require social media platforms to step up their policing of deepfakes and other AI-generated impersonations. The changes impose stringent compliance deadlines: platforms must act on takedown requests within three hours and respond to urgent user complaints within two hours. The regulations aim to provide a formal framework for managing synthetic content, mandating labeling and traceability of such material. The implications are significant for major tech companies like Meta and YouTube, which must adapt quickly in one of the world's largest internet markets. While the intent is to combat harmful content, such as deceptive impersonations and non-consensual imagery, the reliance on automated systems raises concerns about censorship and the erosion of free speech, since the compressed timelines may push platforms toward over-removal. Stakeholders, including digital rights groups, warn that the rules could undermine due process and leave little room for human oversight in content moderation. The situation highlights the difficulty of balancing regulation against individual freedoms online and underscores that AI-driven moderation is anything but neutral in its societal effects.
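
To make the compressed timelines concrete, here is a minimal sketch of how a platform might track the new response windows. The ticket model and category names are hypothetical; only the three-hour and two-hour deadlines come from the rules as described above.

```python
# Minimal sketch of deadline tracking under the amended IT Rules
# as summarized above. The ComplianceTicket model and category
# names are hypothetical; only the 3-hour takedown and 2-hour
# urgent-complaint windows come from the rules described here.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Mandated response window per request type.
SLA_WINDOWS = {
    "takedown_request": timedelta(hours=3),
    "urgent_user_complaint": timedelta(hours=2),
}

@dataclass
class ComplianceTicket:
    kind: str              # one of the SLA_WINDOWS keys
    received_at: datetime  # timezone-aware receipt time

    @property
    def respond_by(self) -> datetime:
        return self.received_at + SLA_WINDOWS[self.kind]

    def is_overdue(self, now: datetime) -> bool:
        return now > self.respond_by

# A takedown request received at 09:00 UTC must be acted on by 12:00 UTC.
ticket = ComplianceTicket(
    kind="takedown_request",
    received_at=datetime(2026, 2, 10, 9, 0, tzinfo=timezone.utc),
)
print(ticket.respond_by)  # 2026-02-10 12:00:00+00:00
```

At platform scale, windows this short effectively force triage by classifier first and human review later, which is precisely the over-removal dynamic that digital rights groups warn about.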

Moratorium on Data Centers Proposed in New York

February 7, 2026

New York state lawmakers have introduced a bill to impose a three-year moratorium on new data centers, citing concerns about their impact on local communities and electricity costs. The bill reflects growing bipartisan apprehension about the rapid build-out of AI infrastructure by tech companies, which could drive up residents' energy bills. Critics as different as Senator Bernie Sanders and Florida Governor Ron DeSantis have voiced concerns about data centers' detrimental effects on the environment and on young people, and over 230 environmental organizations have signed an open letter advocating a national moratorium. Proponents of the bill, including state Senator Liz Krueger and Assemblymember Anna Kelles, argue that New York is unprepared for an influx of massive data centers and that the state needs time to develop appropriate regulations. The fight highlights the broader stakes of AI deployment, economic and environmental alike, as local governments weigh technological advancement against community welfare.
