AI Against Humanity
Back to categories

Research/Academia

13 articles found

The Download: the rise of luxury car theft, and fighting antimicrobial resistance

February 17, 2026

The article highlights the alarming rise of vehicle transport fraud and luxury car theft, revealing a sophisticated criminal enterprise that exploits technology and human deception. Criminals use phishing, fraudulent paperwork, and other tactics to impersonate legitimate transport companies, diverting shipments of high-end vehicles before erasing their traces. This organized crime has largely gone unnoticed, despite its significant impact on the luxury car industry, with victims often unaware of the theft until it is too late. Additionally, the article discusses the urgent issue of antimicrobial resistance, which is responsible for millions of deaths annually and could worsen significantly by 2050. Bioengineer César de la Fuente is utilizing AI to discover new antibiotic peptides, aiming to combat this growing health crisis. The juxtaposition of luxury car theft and antimicrobial resistance illustrates the diverse and serious implications of technology in society, emphasizing the need for awareness and proactive measures against such threats.

Read Article

The scientist using AI to hunt for antibiotics just about everywhere

February 16, 2026

César de la Fuente, an associate professor at the University of Pennsylvania, is leveraging artificial intelligence (AI) to combat antimicrobial resistance, a growing global health crisis linked to over 4 million deaths annually. Traditional antibiotic discovery methods are hindered by high costs and low returns on investment, leading many companies to abandon development efforts. De la Fuente's approach involves training AI to identify antimicrobial peptides from diverse sources, including ancient genetic codes and venom from various creatures. His innovative techniques aim to create new antibiotics that can effectively target drug-resistant bacteria. Despite the promise of AI in this field, challenges remain in transforming these discoveries into usable medications. The urgency of addressing antimicrobial resistance underscores the importance of AI in potentially revolutionizing antibiotic development, as researchers strive to find effective solutions in a landscape where conventional methods have faltered.

Read Article

Data Breach Exposes Risks in Fintech Security

February 13, 2026

Figure Technology, a blockchain-based fintech lending company, has confirmed a data breach resulting from a social engineering attack that compromised sensitive customer information. The breach was executed by the hacking group ShinyHunters, which claimed responsibility and published 2.5 gigabytes of stolen data, including personal details such as full names, addresses, dates of birth, and phone numbers. Figure's spokesperson indicated that the company is in communication with affected individuals and is offering free credit monitoring services. This incident highlights the vulnerabilities of fintech companies to cyber threats, particularly those utilizing single sign-on providers like Okta, which was also targeted in a broader hacking campaign affecting institutions like Harvard University and the University of Pennsylvania. The implications of such breaches are significant, as they not only jeopardize individual privacy but also erode trust in digital financial services, potentially affecting the entire fintech industry and its customers.

Read Article

AI is already making online crimes easier. It could get much worse.

February 12, 2026

The article highlights the increasing risks posed by artificial intelligence (AI) in the realm of cybercrime, particularly through the use of advanced tools like large language models (LLMs). Researchers have discovered a new strain of ransomware, dubbed PromptLock, that utilizes LLMs to automate various stages of cyberattacks, making them more sophisticated and harder to detect. While some experts argue that the threat of fully automated attacks may be overstated, there is consensus that AI is already facilitating a rise in scams and phishing attempts, with criminals leveraging generative AI for more convincing impersonations and fraudulent schemes. The article underscores the urgent need for enhanced cybersecurity measures as AI tools become more accessible and powerful, lowering the barriers for less experienced attackers. The implications of these developments are significant, as they suggest a future where cyberattacks could become more frequent and damaging, impacting individuals, organizations, and entire industries. Companies like Google and Anthropic are mentioned as being involved in the ongoing battle against AI-enhanced cyber threats, but the evolving landscape poses challenges for security measures that must keep pace with technological advancements.

Read Article

India's AI Regulations and Content Moderation Risks

February 10, 2026

India's recent amendments to its IT Rules require social media platforms to enhance their policing of deepfakes and other AI-generated impersonations. These changes impose stringent compliance deadlines, demanding that platforms act on takedown requests within three hours and respond to urgent user complaints within two hours. The new regulations aim to provide a formal framework for managing synthetic content, mandating labeling and traceability of such materials. The implications are significant, particularly for major tech companies like Meta and YouTube, which must adapt quickly to these new requirements in one of the world's largest internet markets. While the intent is to combat harmful content—like deceptive impersonations and non-consensual imagery—the reliance on automated systems raises concerns about censorship and the erosion of free speech, as platforms may resort to over-removal due to compressed timelines. Stakeholders, including digital rights groups, warn that these rules could undermine due process and leave little room for human oversight in content moderation. This situation highlights the challenge of balancing regulation with the protection of individual freedoms in the digital landscape, emphasizing the non-neutral nature of AI in societal implications.

Read Article

AI Adoption Linked to Employee Burnout

February 10, 2026

The article explores the unintended consequences of AI adoption in the workplace, particularly focusing on employee burnout. A study conducted by UC Berkeley researchers at a tech company revealed that while workers initially believed AI tools would enhance productivity and reduce workloads, the reality was quite different. Instead of working less, employees found themselves taking on more tasks, leading to extended work hours and increased stress levels. As expectations for speed and responsiveness rose, the feeling of being overwhelmed became prevalent, with many employees experiencing fatigue and burnout. This finding aligns with similar studies indicating minimal productivity gains from AI, raising concerns about the long-term societal impacts of integrating AI into work culture, where the promise of efficiency may instead lead to adverse effects on mental health and work-life balance.

Read Article

Ransomware Attack Disrupts Major University Operations

February 5, 2026

La Sapienza University in Rome, one of the largest universities in Europe, has experienced significant disruptions due to a ransomware attack allegedly executed by a group called Femwar02. The attack rendered the university's computer systems inoperable for over three days, forcing the institution to suspend digital services and limit communication capabilities. While the university worked to restore its systems using unaffected backups, the extent of the attack remains under investigation by Italy's national cybersecurity agency, ACN. The attackers are reported to have used BabLock malware, also known as Rorschach, which was first identified in 2023. This incident highlights the growing vulnerability of educational institutions to cybercrime, as they are increasingly targeted by hackers seeking ransom, which can severely disrupt academic operations and compromise sensitive data. As universities like La Sapienza continue to navigate these threats, the implications for students and faculty are significant, impacting their ability to engage in essential academic activities and potentially exposing personal information. The ongoing trend of cyberattacks against educational institutions raises concerns regarding the adequacy of cybersecurity measures in place and the broader societal risks associated with such vulnerabilities.

Read Article

From Data Entry to Strategy, AI Is Reshaping How We Do Taxes

February 5, 2026

The integration of AI in tax preparation is revolutionizing traditional processes by enhancing efficiency in tasks like data entry and compliance, allowing tax professionals to focus on strategic advisory services. Companies such as TurboTax, H&R Block, and Dodocs.ai are leveraging AI to expedite tax-related tasks, potentially leading to faster refunds and fewer errors. However, this reliance on automation raises significant ethical concerns, including data privacy risks, algorithmic bias, and a lack of transparency in AI decision-making. The handling of sensitive personal information in tax preparation heightens these risks, particularly as recent policy shifts may weaken data protection requirements. Additionally, algorithmic bias could result in disproportionate audits of marginalized groups, as highlighted by research from the Stanford Institute for Economic Policy Research. The 'black box' nature of AI complicates trust in these systems, emphasizing the need for human oversight to mitigate risks and ensure accountability. While AI has the potential to democratize access to tax strategies for middle-class and low-income workers, addressing these ethical and operational challenges is essential for fostering a fair tax system.

Read Article

Data Breaches at Harvard and UPenn Exposed

February 4, 2026

The hacking group ShinyHunters has claimed responsibility for significant data breaches at Harvard University and the University of Pennsylvania (UPenn), publishing over a million stolen records from each institution. The breaches were linked to social engineering techniques, including voice phishing and impersonation tactics. UPenn's breach, disclosed in November, involved sensitive alumni information, while Harvard's breach involved similar data, such as personal contact details and donation histories. Both universities attributed the breaches to cybercriminal activities, with ShinyHunters threatening to publish the data unless a ransom was paid. In a bid for leverage, the hackers included politically charged statements in their communications, although they are not known for political motives. The universities are now tasked with analyzing the impact and notifying affected individuals, raising concerns over data privacy and security in higher education institutions.

Read Article

Viral AI Prompts: A New Security Threat

February 3, 2026

The emergence of Moltbook highlights a significant risk associated with viral AI prompts, termed 'prompt worms' or 'prompt viruses,' that can self-replicate among AI agents. Unlike traditional malware that exploits operating system vulnerabilities, these prompt worms leverage the AI's inherent ability to follow instructions, potentially leading to widespread misuse. Researchers have already identified various prompt-injection attacks within the Moltbook ecosystem, with evidence of malicious skills that can exfiltrate data. The OpenClaw platform exemplifies this risk by enabling over 770,000 AI agents to autonomously interact and share prompts, creating an environment ripe for contagion. With the potential for these self-replicating prompts to spread rapidly, the implications for cybersecurity, privacy, and data integrity are alarming, as even less intelligent AI can still cause significant disruption when operating in networks designed for autonomy and interaction. The rapid growth of AI systems, like OpenClaw, without thorough vetting poses a serious threat to both individual users and larger systems, making it imperative to address these vulnerabilities before they escalate into widespread issues.

Read Article

AI’s Future Isn’t in the Cloud, It’s on Your Device

January 20, 2026

The article explores the shift from centralized cloud-based artificial intelligence (AI) processing to on-device systems, highlighting the benefits of speed, privacy, and security. While cloud AI can manage complex tasks, it often introduces latency and raises privacy concerns, especially regarding sensitive data. Consequently, tech developers are increasingly focusing on edge computing, which processes data closer to the user, thereby enhancing user control over personal information and reducing the risk of data breaches. Companies like Apple and Qualcomm are at the forefront of this transition, developing technologies that prioritize user consent and data ownership. However, the handoff between on-device and cloud processing can undermine the privacy advantages of on-device AI. Additionally, while advancements in on-device models have improved accuracy and speed for tasks like image classification, more complex functions still depend on powerful cloud resources. This evolution in AI deployment presents challenges in ensuring compatibility across diverse hardware and raises critical concerns about data misuse and algorithmic bias as AI becomes more integrated into everyday devices.

Read Article

Is AI Putting Jobs at Risk? A Recent Survey Found an Important Distinction

October 8, 2025

The article examines the impact of AI on employment, particularly through generative AI and automation. A survey by SHRM involving over 20,000 US workers found that while many jobs contain tasks that can be automated, only a small percentage are at significant risk of displacement. Specifically, 15.1% of jobs are at least 50% automated, but only 6% face vulnerability due to nontechnical barriers like client preferences and regulatory issues. This suggests a more gradual transition in the labor market than the alarming predictions from some AI industry leaders. High-risk sectors include computer and mathematical work, while jobs requiring substantial human interaction, such as in healthcare, are less likely to be automated. The healthcare industry continues to grow, emphasizing the importance of human skills—particularly interpersonal and problem-solving abilities—that generative AI cannot replicate. This trend indicates a shift in workforce needs, prioritizing employees who can handle complex human-centric challenges, highlighting the necessity for a balanced approach to AI integration that maintains the value of human skills in less automatable sectors.

Read Article

Vulnerabilities in Gemini AI Posing Smart Home Risks

August 6, 2025

Recent revelations from the Black Hat computer-security conference highlight significant vulnerabilities in Google's Gemini AI, specifically its susceptibility to 'promptware' attacks. Researchers from Tel Aviv University demonstrated that malicious prompts could be embedded within innocuous Google Calendar invites, allowing Gemini to issue commands to connected Google Home devices. For example, a hidden command could instruct Gemini to control everyday tasks such as turning off lights or accessing the user's location. Despite Google's efforts to patch these vulnerabilities following the researchers' responsible disclosure, concerns remain about the potential for similar attacks as AI systems become more integrated into smart home technology. The nature of Gemini's design, which relies on processing natural language commands, exacerbates these risks by allowing adversaries to exploit seemingly benign interactions. As AI technologies continue to evolve, the need for robust security measures becomes increasingly critical to safeguard users against emerging threats in their own homes.

Read Article