AI Against Humanity

Equity

Explore articles and analysis covering Equity in the context of AI's impact on humanity.


Bluesky's Attie: AI-Driven Social Media Customization

Bluesky has launched Attie, an AI assistant that enables users to create personalized social media feeds through natural language interactions. Built on the AT Protocol and powered by Anthropic's Claude AI, Attie aims to democratize app development, allowing even those without coding skills to curate their online experiences. This innovation is seen as a significant step towards enhancing user engagement and personalization in social media. However, the introduction of such AI-driven customization raises concerns about privacy and equity, as it could lead to algorithmic biases and the potential for misuse of personal data. As Bluesky continues to develop Attie, the...


Articles

Trump ignores biggest reasons his AI data center buildout is failing

April 3, 2026

Donald Trump's initiative to rapidly construct AI data centers in the U.S. is encountering significant challenges, primarily due to supply chain disruptions stemming from tariffs on Chinese imports. Nearly 50% of planned projects are either delayed or canceled because essential components, such as transformers and batteries, are facing delivery wait times of up to five years. Although Trump advocates for U.S. manufacturing, the domestic capacity is inadequate to meet the growing demand. Analysts note that only a third of the largest AI data centers expected to be operational by 2026 are currently under construction. Compounding these issues is Trump's failure to address critical power infrastructure challenges, which complicate the construction process regardless of the energy sources used. Additionally, there is rising opposition to AI data center developments, particularly in Maine, where a proposed moratorium aims to evaluate their environmental and community impacts. Concerns include increased utility costs and the potential for data centers to create 'heat islands' that worsen pollution and health issues. The bipartisan AI Data Center Moratorium Act, introduced by Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez, seeks to ensure that AI advancements do not harm communities or the environment, reflecting a growing political and public pushback against rapid...


The Download: gig workers training humanoids, and better AI benchmarks

April 1, 2026

The article discusses the emerging trend of gig workers, such as medical students in Nigeria, training humanoid robots by recording their daily activities. These workers are employed by Micro1, a company that collects and sells this data to robotics firms, raising significant concerns regarding privacy and informed consent. While the jobs provide local economic benefits, they also highlight ethical dilemmas surrounding the exploitation of low-cost labor in developing countries. Additionally, the article critiques the current methods used to evaluate AI systems, which often assess their performance in isolated scenarios rather than in real-world, complex environments. This misalignment can lead to misunderstandings about AI's capabilities and risks, necessitating the development of new benchmarks that consider human-AI interactions over time. The implications of these issues are profound, as they affect not only the workers involved but also the broader societal understanding of AI's role and impact in various sectors.


AI Personalization Risks in Social Media

March 29, 2026

Bluesky has introduced Attie, an AI assistant designed to allow users to create personalized content feeds using natural language. This tool is built on the AT Protocol and powered by Anthropic's Claude, aiming to democratize app development by enabling users without coding skills to customize their software experiences. While this innovation could enhance user engagement and personalization, it raises concerns about the implications of AI-driven content curation. The potential for algorithmic bias and the manipulation of user preferences could lead to the reinforcement of echo chambers, where users are only exposed to information that aligns with their existing beliefs. This could have significant societal impacts, particularly in shaping public discourse and influencing opinions. Attie is currently in a closed beta, but its eventual widespread use could exacerbate existing issues related to misinformation and social division. As AI systems like Attie become more integrated into daily life, understanding their implications is crucial for ensuring ethical and responsible deployment.


Cohere's New Voice Model Raises Concerns

March 26, 2026

Cohere has launched an open-source automatic speech recognition model named Transcribe, designed for tasks like note-taking and speech analysis. The model, which is relatively lightweight at 2 billion parameters, supports 14 languages and is optimized for consumer-grade GPUs, allowing users to self-host it. Transcribe has demonstrated superior performance on the Hugging Face Open ASR leaderboard, achieving a lower average word error rate compared to competitors. However, it struggles with certain languages, including Portuguese, German, and Spanish. The model is intended to be integrated into Cohere's enterprise agent orchestration platform, North, and will be available through an API for free. As demand for speech recognition technology rises, the implications of deploying such models raise concerns about accuracy and potential biases, particularly in multilingual contexts. The launch reflects a growing trend in AI towards more accessible tools, but also highlights the need for careful consideration of the societal impacts of AI technologies, especially as they become more integrated into everyday applications.
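The leaderboard comparison above hinges on word error rate (WER), the standard ASR accuracy metric. As a minimal sketch (the general metric, not Cohere's or Hugging Face's evaluation code), WER is the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A lower average WER across test sets is what "superior performance" on the leaderboard means; the per-language weaknesses noted above would show up as higher WER on Portuguese, German, and Spanish test sets.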


AI videos of sexualised black women removed from TikTok after BBC investigation

March 22, 2026

A recent investigation by the BBC revealed a troubling trend on social media platforms TikTok and Instagram, where AI-generated avatars of highly sexualized black women were used to promote explicit content. The accounts, which often employed racial stereotypes and misleading language, were found to be exploiting black female imagery without proper labeling, violating platform guidelines. Following the investigation, TikTok banned 20 accounts, while Instagram's parent company Meta is currently investigating the issue. The use of these AI-generated characters raises significant concerns regarding racism, exploitation, and the potential for misleading audiences, as many viewers treat these avatars as real individuals. Critics argue that this trend perpetuates harmful stereotypes and erases authentic representations of black women, highlighting the urgent need for accountability in AI content generation and social media regulation.


The Dark Side of AI Gig Work

March 21, 2026

The article explores the implications of DoorDash's new Tasks app, which allows gig workers to earn money by performing mundane tasks that help train artificial intelligence systems. The author documents their experience of recording videos of daily activities, such as doing laundry and cooking, to provide data for AI algorithms. This raises significant concerns about the future of gig work, as it highlights how technology can exploit workers by turning their everyday actions into data points for AI training. The Tasks app exemplifies a trend where human labor is commodified, reducing meaningful work to mere data generation, often under precarious conditions. The gig economy, while offering flexibility, also exposes workers to instability and a lack of job security, as they are often not classified as employees with benefits. This development underscores the need for a critical examination of how AI systems are integrated into labor markets and the potential for exploitation inherent in such models.


The gen AI Kool-Aid tastes like eugenics

March 21, 2026

The article discusses the troubling implications of generative AI, particularly through the lens of Valerie Veatch's documentary, 'Ghost in the Machine.' Veatch, initially drawn to the potential of AI, became disillusioned upon witnessing the technology's tendency to produce outputs rife with racism and sexism. Her experiences with OpenAI's Sora model highlighted a lack of concern among AI enthusiasts regarding the harmful biases embedded in the technology. The documentary traces the historical roots of these biases back to eugenics, emphasizing how early race science has influenced modern AI development. Veatch argues that the term 'artificial intelligence' is misleading and serves as a marketing tool that obscures the technology's problematic foundations. By connecting the dots between historical eugenics and contemporary AI, the documentary seeks to raise awareness about the ethical implications of deploying such technologies in society, underscoring that AI is not neutral but rather reflects the biases of its creators. This historical context is crucial for understanding why generative AI often perpetuates harmful ideologies and why companies like OpenAI may be reluctant to address these issues directly.


AI Controversy in Publishing: 'Shy Girl' Incident

March 20, 2026

The controversy surrounding Mia Ballard's horror novel 'Shy Girl' has sparked significant debate about the use of AI in literature. After a New York Times investigation suggested that substantial portions of the book may have been generated by AI, publisher Hachette withdrew the novel from the UK market and canceled its US release. Critics pointed out that the writing bore similarities to chatbot-generated text, leading to widespread scrutiny. While Ballard denied using AI herself, she acknowledged that a friend involved in editing might have employed AI tools. This incident highlights the growing tension in the publishing industry regarding AI's role in creative writing, raising questions about authenticity, quality, and the future of literature. As AI-generated content becomes more prevalent, traditional publishing faces challenges similar to those currently affecting the music industry, where AI tools are increasingly used to produce music. The implications of this controversy extend beyond Ballard's personal struggles, as it underscores the need for clearer guidelines and ethical standards in the use of AI in creative fields.


DoorDash's Tasks App Raises Ethical Concerns

March 19, 2026

DoorDash has introduced a new stand-alone app called 'Tasks' that allows delivery couriers to earn money by completing assignments aimed at training AI and robotic systems. Couriers can engage in various tasks, such as filming themselves performing everyday activities or capturing images to help improve AI models used by DoorDash and its partners in sectors like retail and hospitality. This initiative is part of DoorDash's strategy to leverage its vast workforce of over 8 million Dashers to gather data that can enhance AI understanding of the physical world. The Tasks app is currently available in select U.S. locations, though not in California or New York City, with plans for future expansion. Other companies, such as Uber, have also begun similar programs, raising concerns about the ethical implications of using gig workers for AI training and the potential exploitation of their labor. The reliance on gig economy workers for data collection highlights the broader societal risks of AI deployment, including issues of privacy, labor rights, and the commodification of personal data.


Apple MacBook Neo review: Can a Mac get by with an iPhone’s processor inside?

March 10, 2026

The article reviews the Apple MacBook Neo, a budget-friendly laptop priced at $599, aimed at first-time buyers and students. While it features a modern design and adequate performance for everyday tasks, it lacks several standard specifications found in higher-end models, such as the MacBook Air and Pro. The Neo is powered by the A18 Pro processor, originally designed for the iPhone 16 Pro, which results in limitations like reduced multi-core performance, throttling during intensive tasks, and a fixed 8GB RAM. Users may experience delays and degraded performance under heavier workloads, making it unsuitable for demanding applications like video editing or gaming. Additionally, the laptop omits features such as a backlit keyboard, Touch ID, and high-quality webcam, raising concerns about its long-term usability. Despite these drawbacks, the MacBook Neo's affordability and Apple's brand support make it an attractive option for budget-conscious consumers. However, the article suggests that those who can afford it may be better off investing in a MacBook Air for a more satisfying experience.


Exploitation Risks in AI Labor Camps

March 8, 2026

The article highlights the troubling intersection of artificial intelligence and the exploitation of temporary labor through the establishment of 'man camps' for workers constructing AI data centers. As demand for data centers surges, companies like Target Hospitality are capitalizing on this trend by building temporary housing for thousands of workers, reminiscent of camps used in remote oil fields. Target Hospitality, which also operates the Dilley Immigration Processing Center, has faced allegations of poor living conditions and inadequate care for detained families. The article raises concerns about the ethical implications of AI-driven labor practices, particularly how they may perpetuate exploitation and neglect, especially in vulnerable communities. The focus on profit in the AI sector may overshadow the human costs associated with such developments, emphasizing the need for scrutiny of how AI technologies impact societal structures and labor rights.


RAM Shortage Forces Apple to Adjust Offerings

March 6, 2026

Apple's recent product announcements have been overshadowed by a significant RAM shortage impacting the tech industry. Notably, the company has removed the 512GB RAM option from its high-end M3 Ultra Mac Studio desktop, a move that reflects the broader supply chain issues affecting memory production. The shortage is attributed to manufacturers prioritizing high-bandwidth memory (HBM) for AI accelerators, such as Nvidia's H200, which has led to a scarcity of traditional DRAM. This situation has forced Apple to increase prices for its remaining RAM configurations, with CEO Tim Cook warning that rising memory costs could affect the company's profit margins. Smaller companies are also feeling the pinch, facing delayed product launches and increased prices as they compete for limited resources. The implications of this RAM shortage extend beyond Apple, affecting various industries reliant on high-performance computing and AI applications, highlighting the interconnectedness of tech supply chains and the challenges posed by the growing demand for AI technologies.


City Detect, which uses AI to help cities stay safe and clean, raises $13M Series A

March 6, 2026

City Detect, a startup founded in 2021, has raised $13 million in Series A funding led by Prudence Venture Capital to enhance urban safety and cleanliness through vision AI technology. The company employs advanced computer vision by mounting cameras on public vehicles to monitor urban conditions, identifying issues such as graffiti, illegal dumping, and building maintenance. This innovative approach significantly improves inspection efficiency compared to traditional methods and currently operates in at least 17 cities, including Dallas and Miami. City Detect is committed to a Responsible AI policy to ensure transparency and accountability in its operations. The funding will be used to enhance its technology and expand services across the U.S., reflecting the increasing reliance on AI in municipal management. However, the deployment of such systems raises concerns regarding data privacy, algorithmic biases, and the implications of automated decision-making in public governance. As cities adopt AI solutions, addressing these ethical considerations is crucial to ensure equitable and effective outcomes for all community members.


Meta's New Policy on AI Chatbots Raises Concerns

March 5, 2026

Meta has announced that it will permit AI companies to offer their chatbots on WhatsApp via its Business API for the next 12 months in Europe, following pressure from the European Commission to avoid an investigation. This policy change comes after Meta had previously restricted third-party AI chatbot providers from using its API, a move that raised antitrust concerns. While the new policy allows general-purpose AI chatbots to operate on WhatsApp, it imposes a fee ranging from €0.0490 to €0.1323 per non-template message, which could be financially burdensome for smaller AI service providers. The European Commission is currently analyzing the implications of this policy change as part of its broader antitrust investigation into Meta's practices. Critics argue that the policy is anti-competitive, particularly since it does not apply to businesses using AI for customer service with templated messages, thereby favoring Meta's own AI offerings. This situation highlights the ongoing tension between regulatory bodies and tech giants regarding fair competition in the rapidly evolving AI landscape.
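To make the burden on smaller providers concrete, here is a rough illustration (not Meta's billing logic; the message volume is hypothetical) of what the reported fee band implies at scale:

```python
# Per-message fee range for non-template messages, as reported in the article.
FEE_MIN_EUR = 0.0490   # lower bound of Meta's reported fee
FEE_MAX_EUR = 0.1323   # upper bound of Meta's reported fee

def monthly_cost_range(messages_per_month: int) -> tuple[float, float]:
    """Return the (min, max) monthly fee in EUR for a given message volume."""
    return (
        messages_per_month * FEE_MIN_EUR,
        messages_per_month * FEE_MAX_EUR,
    )

# Hypothetical example: a chatbot exchanging 100,000 non-template
# messages a month would owe Meta roughly EUR 4,900 to EUR 13,230.
low, high = monthly_cost_range(100_000)
```

Because templated customer-service messages are exempt, a competing general-purpose chatbot pays this fee on every reply while Meta's own AI offerings do not, which is the crux of the anti-competitiveness critique above.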


DiligenceSquared uses AI, voice agents to make M&A research affordable

March 5, 2026

The article discusses how DiligenceSquared is leveraging artificial intelligence and voice agents to revolutionize the mergers and acquisitions (M&A) research landscape. By making this research more affordable and accessible, the company aims to democratize the M&A process, traditionally dominated by large firms with significant resources. The use of AI allows for faster data analysis and insights generation, which can help smaller companies compete in the M&A space. However, this innovation raises concerns about the accuracy and reliability of AI-generated insights, as well as the potential for bias in the algorithms used. As AI continues to influence critical business decisions, understanding its limitations and the implications of its deployment becomes increasingly important for all stakeholders involved in M&A activities.


Rising Laptop Prices Linked to RAM Shortage

March 3, 2026

Apple's recent launch of the MacBook Pro and MacBook Air laptops has been overshadowed by significant price increases, with models costing between $100 and $400 more than previous generations. This surge in pricing is attributed to a widespread shortage of RAM, which has been exacerbated by the growing demand for AI-capable hardware. The new M5 Pro and M5 Max chips boast impressive specifications, particularly for AI applications, but the rising costs may deter consumers and impact overall market dynamics. Analysts predict that the RAM shortage will lead to a decline in smartphone shipments and affect other hardware sectors, including laptops. As Apple raises its prices, it could signal broader challenges within the tech industry, highlighting the interconnectedness of AI advancements and hardware availability. This situation underscores the potential risks associated with the rapid deployment of AI technologies, particularly regarding supply chain vulnerabilities and consumer affordability.


AI companies are spending millions to thwart this former tech exec’s congressional bid

March 3, 2026

The article highlights the growing concern among Americans regarding the rapid deployment of AI technologies and the potential negative implications for society. Many citizens express skepticism about whether the government can effectively regulate AI to ensure that its benefits are distributed equitably. This skepticism is fueled by the perception that AI advancements may favor a select few rather than the broader population. The piece underscores the urgency for regulatory frameworks that can address these concerns and protect public interests, especially as AI continues to evolve and integrate into various sectors. The involvement of pro-AI political action committees (PACs) raises questions about the influence of corporate interests on policy-making, further complicating the landscape of AI governance. As AI systems become more prevalent, the need for responsible oversight becomes increasingly critical to prevent exacerbating existing inequalities and ensuring that technological advancements serve the common good.


AI Replaces Human Leadership at Uber

February 24, 2026

Uber's CEO, Dara Khosrowshahi, revealed that engineers at the company have created an AI version of him, referred to as 'Dara AI.' This chatbot is used by engineers to prepare for meetings, allowing them to refine their presentations before presenting to the actual CEO. Khosrowshahi noted that around 90% of Uber’s software engineers are utilizing AI in their work, with 30% being 'power users' who are fundamentally rethinking the company's architecture. This shift towards AI is significantly enhancing productivity within the organization. However, the implications of replacing human roles with AI, even in preparatory contexts, raise concerns about the potential devaluation of human input and creativity in decision-making processes. The reliance on AI tools may also lead to a homogenization of ideas, as engineers might prioritize AI-generated outputs over diverse human perspectives, ultimately impacting innovation and workplace dynamics.


AI-Driven Employment: Risks of RentAHuman

February 18, 2026

The emergence of RentAHuman, a new online platform where AI agents hire humans for various tasks, marks a significant shift in the labor market. Unlike traditional fears of robots taking jobs, this platform creates opportunities for individuals to work under the direction of AI. Currently, over 518,000 people are engaged in tasks ranging from counting pigeons to delivering products, showcasing a bizarre yet intriguing intersection of human labor and artificial intelligence. However, this raises critical concerns about the implications of AI-driven employment, including the potential for exploitation, the devaluation of human work, and the ethical considerations surrounding AI's role in hiring and management. As AI systems become more integrated into the workforce, understanding the risks and consequences of such platforms is essential for navigating the future of work and ensuring fair labor practices. The phenomenon of RentAHuman exemplifies the complexities of AI's impact on society, highlighting the need for careful regulation and ethical guidelines to protect workers in an increasingly automated world.


Airbnb's AI Integration: Risks and Implications

February 14, 2026

Airbnb is set to enhance its platform by integrating AI features powered by large language models (LLMs) to improve user experience in search, trip planning, and property management. CEO Brian Chesky announced plans to create an 'AI-native experience' that personalizes interactions, allowing the app to understand user preferences and assist in planning trips more effectively. The company is currently testing a natural language search feature, which aims to provide a more intuitive way for users to inquire about properties and locations. Additionally, Airbnb's AI-powered customer support bot has reportedly resolved a third of customer issues without human intervention, with plans to expand its capabilities further. As Airbnb seeks to optimize its operations, the potential for AI to influence user experiences raises concerns about data privacy, algorithmic bias, and the implications of reducing human involvement in customer service. The integration of AI could lead to a more streamlined experience but also risks exacerbating inequalities and diminishing the personal touch in service industries. The company aims to increase AI usage among its engineers and is exploring the possibility of incorporating sponsored listings into its AI search features, which raises ethical questions about commercialization in AI-driven environments.


Ring Ends Flock Partnership Amid Surveillance Concerns

February 13, 2026

Amazon's Ring has decided to terminate its partnership with Flock Safety, which specializes in AI-powered surveillance cameras that have raised concerns regarding their use by law enforcement agencies, including ICE and the Secret Service. Initially, the collaboration was intended to enable Ring users to share doorbell footage with Flock for law enforcement purposes. However, the integration was deemed more resource-intensive than expected. This follows public apprehension over the implications of such surveillance technologies, particularly in light of racial biases associated with AI algorithms. Ring has a history of security issues, having previously faced scrutiny for allowing unauthorized access to customer videos. Although the partnership with Flock is off, Ring still has existing collaborations with other law enforcement entities, like Axon, which raises ongoing concerns about privacy and mass surveillance in an era where public awareness of these issues is growing significantly. The cancellation of the partnership underscores the complexities and ethical dilemmas surrounding AI surveillance technologies in the context of societal implications and civil liberties.


AI Exploitation in Gig Economy Platforms

February 12, 2026

The article explores the experience of using RentAHuman, a platform where AI agents hire individuals to promote AI startups. Instead of providing a genuine gig economy opportunity, the platform is dominated by bots that perpetuate the AI hype cycle, raising concerns about the authenticity and value of human labor in the age of AI. The author reflects on the implications of being reduced to a mere tool for AI promotion, highlighting the risks of dehumanization and the potential exploitation of gig workers. This situation underscores the broader issue of how AI systems can manipulate human roles and contribute to economic harm by prioritizing automation over meaningful employment. The article emphasizes the need for critical examination of AI's impact on labor markets and the ethical considerations surrounding its deployment in society.


Concerns Over AI Ethics Spark Controversy at OpenAI

February 11, 2026

Ryan Beiermeister, former vice president of product policy at OpenAI, was reportedly fired following allegations of sex discrimination made by a male colleague. Her termination occurred after she raised concerns about a controversial new feature for ChatGPT known as 'adult mode,' which would incorporate erotic content into the chatbot's interactions. This feature has sparked debate within the company regarding its potential impacts on users, particularly vulnerable populations. Despite OpenAI's statement that Beiermeister's firing was unrelated to her concerns, the incident raises significant questions about workplace dynamics, ethical considerations in AI deployment, and how dissenting voices are treated in tech environments. The situation highlights the complex interplay between product development, employee rights, and the societal implications of AI technologies, particularly as they pertain to sensitive content and user safety.


Moltbook: A Cautionary AI Experiment

February 6, 2026

The recent rise of Moltbook, a social network designed for AI bots, has sparked significant discussions regarding the implications of AI systems in society. Launched by tech entrepreneur Matt Schlicht, the platform quickly gained popularity, with over 1.7 million bots posting and commenting on various topics. The experiment highlights the risks associated with AI's autonomy, as many bots exhibited behavior that mimics human social media interaction rather than demonstrating true intelligence. Critics argue that the chaotic and spam-filled environment of Moltbook raises questions about the future of AI agents, particularly regarding the potential for misinformation and the lack of meaningful oversight. As the excitement surrounding Moltbook fades, it reflects society's obsession with AI while underscoring how far we are from achieving genuine autonomous intelligence. The implications for communities and industries relying on AI are substantial, particularly in terms of managing the risks of AI misbehavior and misinformation propagation. The behaviors observed on Moltbook serve as cautionary tales of the unforeseen challenges that could arise as AI becomes more integrated into our daily lives.
