AI Against Humanity

Recruitment

Explore articles and analysis covering Recruitment in the context of AI's impact on humanity.

Articles

Peter Thiel’s big bet on solar-powered cow collars

April 4, 2026

Peter Thiel's Founders Fund is investing in Halter, a New Zealand startup that has developed solar-powered smart collars for cattle management. Founded by Craig Piggott, Halter's technology creates virtual fences, allowing farmers to monitor and control grazing patterns remotely, which can enhance land productivity by up to 20%. The collars also collect behavioral data to track animal health and fertility, and are now worn by over a million cattle across more than 2,000 farms in New Zealand, Australia, and the U.S. Despite these successes, the rise of AI-driven agricultural solutions raises concerns about animal welfare, data privacy, and over-reliance on technology in farming. As Halter competes with other companies like Merck, the implications of these technologies for traditional farming methods and animal treatment require careful consideration. With approximately $400 million raised, Halter aims for global expansion, recognizing a vast market opportunity while emphasizing that widespread adoption depends on delivering strong financial returns to farmers.


Meta Suspends Mercor Partnership After Breach

April 3, 2026

Meta has halted its collaboration with Mercor, a data vendor, following a significant data breach that may have compromised sensitive information regarding AI model training. This incident has raised alarms across the AI industry, prompting other major AI labs to reassess their partnerships with Mercor as they investigate the breach's extent. The breach not only threatens proprietary data but also highlights the vulnerabilities within the AI supply chain, where data vendors play a crucial role in shaping AI systems. The implications of such breaches extend beyond individual companies, potentially affecting the integrity and security of AI technologies as a whole. As AI systems become increasingly integrated into various sectors, the risks associated with data breaches and the exposure of sensitive information could undermine public trust and lead to broader societal consequences. The ongoing investigation into Mercor's security incident underscores the need for stringent data protection measures in the AI industry to safeguard against future risks and maintain the ethical deployment of AI technologies.


Mercor Cyberattack Highlights Open Source Risks

April 1, 2026

Mercor, an AI recruiting startup, has confirmed it was affected by a security breach stemming from a supply-chain attack on the open-source project LiteLLM, attributed to the hacking group TeamPCP. The incident has raised concerns about security vulnerabilities in widely used open-source software, as LiteLLM is downloaded millions of times daily. Following the breach, the extortion group Lapsus$ claimed responsibility for accessing Mercor's data, although the specifics of what was accessed remain unclear. Mercor works with companies like OpenAI and Anthropic to train AI models, and the breach could expose sensitive contractor and customer information. The company has stated it is conducting a thorough investigation with third-party forensics experts to address the incident and communicate with affected parties. The episode highlights the risks of relying on open-source software in AI systems, where a single compromised dependency can lead to data breaches across many organizations.


Mercor competitor Deccan AI raises $25M, sources experts from India

March 26, 2026

Deccan AI, a startup specializing in post-training data and evaluation for AI models, has raised $25 million to address the growing demand for AI training services. Founded in October 2024, the company primarily employs a workforce based in India, tapping into a network of over 1 million contributors, including students and domain experts. Deccan collaborates with leading AI labs like Google DeepMind and Snowflake to enhance AI capabilities and ensure reliability in real-world applications. However, the rapid growth of the company raises concerns about the working conditions and compensation for gig workers involved in generating training data. While Deccan emphasizes speed and quality, its reliance on a gig economy workforce poses risks of exploitation and inequities. Additionally, the challenges of maintaining quality assurance in post-training processes highlight the critical need for accurate, domain-specific data, as even minor errors can significantly affect model performance. This situation underscores the ethical considerations and potential systemic biases in AI deployment, emphasizing the importance of balancing efficiency with fair labor practices in the AI value chain.


AI's Rising Threat to Legal Professions

February 6, 2026

The article highlights recent advances in AI capabilities, particularly Anthropic's Opus 4.6, which shows promising results on professional tasks such as legal analysis. The score improvement, from under 25% to nearly 30%, raises concerns about the potential displacement of human lawyers as AI models evolve rapidly. Although current scores remain far from full competency, the trajectory suggests that AI could eventually threaten professions that depend on complex problem-solving. The article emphasizes that while immediate job displacement is unlikely, AI's growing effectiveness should prompt professionals to reconsider their roles and the future of their industries. Increasing reliance on AI in legal and corporate settings may bring significant shifts in job security, along with ethical questions about decision-making and accountability.
