AI Against Humanity

Cybercrime/Malicious

Explore articles and analysis covering Cybercrime/Malicious in the context of AI's impact on humanity.

Articles

Spyware Maker Sentenced, Avoids Jail Time

April 6, 2026

Bryan Fleming, the founder of the spyware company pcTattletale, has been sentenced to time served and a $5,000 fine after pleading guilty to federal charges related to his illegal surveillance operations. This marks the first successful prosecution of a spyware maker by the U.S. Department of Justice in nearly a decade. Fleming's company was known for creating 'stalkerware' that allowed users to secretly monitor the devices of others without their consent. Investigations revealed that pcTattletale had significant security flaws, leading to a data breach that exposed sensitive information from numerous victims. Despite the severity of the crimes, Fleming avoided jail time, raising concerns about the accountability of spyware developers and the broader implications for privacy and security in the digital age. The case highlights the urgent need for stricter regulations and enforcement against illegal surveillance technologies, especially as the spyware industry continues to thrive in a largely unregulated environment.

Read Article

Grammarly’s sloppelganger saga

April 5, 2026

Grammarly, recently rebranded as Superhuman, faced backlash for its 'Expert Review' feature, which used the names of renowned experts to generate writing suggestions without their consent. The feature, which aimed to provide insights from professionals, invoked names like Stephen King and Neil deGrasse Tyson, and criticism turned to outrage when it emerged that the names of working journalists had also been used without permission. Critics noted that the suggestions were often generic and did not accurately represent the experts' views. Following public outcry and a class action lawsuit filed by journalist Julia Angwin for privacy violations, Superhuman disabled the feature. The incident underscores the extractive nature of AI, raising concerns about consent, representation, and the ethical implications of using individuals' likenesses without proper authorization, and reflects broader societal anxieties about AI's impact on intellectual property and personal rights, emphasizing the need for clearer regulations and ethical standards in AI deployment.

Read Article

Cloudflare appeals Piracy Shield fine, hopes to kill Italy's site-blocking law

March 18, 2026

Cloudflare is appealing a hefty 14.2 million euro fine imposed by Italy's communications regulator, AGCOM, for non-compliance with the Piracy Shield law. This law requires websites accused of copyright infringement to be blocked within 30 minutes, a process Cloudflare argues undermines the broader Internet ecosystem by favoring large rightsholders at the expense of public access. The company contends that complying would require a filtering system that could degrade its DNS service performance globally. Additionally, Cloudflare criticizes the law for lacking transparency and due process, leading to potential overblocking of legitimate sites without judicial oversight. The company claims the fine is disproportionate because it was calculated on Cloudflare's global revenue rather than its Italian earnings, and argues that the law violates EU regulations, particularly the Digital Services Act, which mandates that content restrictions be proportionate. As Cloudflare seeks EU intervention, concerns about unchecked censorship and the implications of AI-driven content moderation systems continue to grow, highlighting the risks such regulations pose beyond Italy's borders.

Read Article

Grammarly Faces Lawsuit Over AI Feedback Feature

March 12, 2026

Grammarly's recent launch of the 'Expert Review' feature, which uses AI to simulate feedback from well-known authors without their consent, has sparked controversy and legal action. Journalist Julia Angwin has filed a class action lawsuit against Superhuman, Grammarly's parent company, claiming that the feature violates privacy and publicity rights by impersonating her and other writers. Critics, including AI ethicist Timnit Gebru, have raised concerns about the ethical implications of using individuals' likenesses and expertise without permission, especially when the AI-generated feedback is generic and lacks substance. The backlash led to Grammarly disabling the feature, although Superhuman's CEO defended the concept, suggesting it could foster connections between users and experts. This incident highlights the risks of AI technologies in misappropriating personal identities and expertise, raising questions about consent and the quality of AI-generated content.

Read Article

Grammarly's AI Feature Sparks Legal Controversy

March 11, 2026

Grammarly, a writing assistance tool developed by Superhuman, is currently facing a class action lawsuit due to its AI feature known as 'Expert Review.' This feature provided users with editing suggestions that were falsely attributed to established authors and academics without their consent. The lawsuit highlights significant ethical concerns surrounding the use of AI in content creation, particularly regarding consent and intellectual property rights. By misrepresenting the source of these suggestions, Grammarly not only risks legal repercussions but also undermines the trust of its user base and the integrity of the authors involved. The company has since shut down the feature, but the incident raises broader questions about the implications of AI technologies in creative fields and the potential for misuse that can harm individuals and communities. As AI systems become more integrated into everyday applications, the need for clear ethical guidelines and accountability becomes increasingly urgent to prevent similar issues in the future.

Read Article

Grammarly Faces Lawsuit Over Identity Theft

March 11, 2026

Grammarly is facing a class-action lawsuit filed by journalist Julia Angwin, who claims the company unlawfully used her identity in its 'Expert Review' AI feature without her consent. This feature, which was designed to provide AI-generated editing suggestions by mimicking the insights of real experts, has drawn criticism for violating privacy and publicity rights. Angwin discovered her likeness was used when another journalist revealed the issue, prompting her to take legal action against Grammarly. In response to the backlash, Grammarly's CEO acknowledged the misstep and announced the discontinuation of the feature, stating that the company would rethink its approach moving forward. This incident raises significant concerns about the ethical implications of AI technologies that exploit individuals' identities for commercial gain without permission, highlighting the need for stricter regulations and ethical standards in AI deployment.

Read Article

Grammarly says it will stop using AI to clone experts without permission

March 11, 2026

Grammarly recently announced it will discontinue its 'Expert Review' AI feature, which had drawn criticism for misrepresenting the voices of real experts without their consent. The feature, launched in August, utilized publicly available information to generate writing suggestions based on the work of influential figures. Following backlash from experts who felt their identities were being exploited, Superhuman, the company behind the feature, acknowledged the concerns and committed to rethinking its approach. The decision to disable the feature reflects a growing awareness of the ethical implications of AI technologies, particularly regarding consent and representation. Moving forward, Superhuman aims to ensure that experts have control over how their knowledge is utilized and represented in AI applications, emphasizing the importance of collaboration and ethical standards in AI development.

Read Article

Grammarly will keep using authors’ identities without permission unless they opt out

March 10, 2026

Grammarly's new feature, 'Expert Review,' has sparked controversy as it utilizes the names of authors without their consent, presenting AI-generated suggestions as credible insights. The company faced backlash after it was revealed that many prominent authors were unknowingly included in this feature, which leverages their identities to enhance the perceived authority of its AI outputs. In response to the criticism, Grammarly announced that authors could opt out of this feature by emailing the company, but did not offer an apology or indicate any intention to change the underlying practice. Critics argue that this approach is inadequate, as it places the onus on authors to protect their names rather than ensuring their consent is obtained beforehand. The situation raises significant concerns about identity appropriation and the ethical implications of AI technologies that leverage personal identities without permission, highlighting the need for stricter regulations and ethical standards in AI deployment.

Read Article

An iPhone-hacking toolkit used by Russian spies likely came from U.S. military contractor

March 10, 2026

A sophisticated hacking toolkit known as 'Coruna,' developed by U.S. military contractor L3Harris, has been linked to cyberattacks targeting iPhone users in Ukraine and China after falling into the hands of Russian government hackers and Chinese cybercriminals. Initially designed for Western intelligence operations, Coruna comprises 23 components and was first deployed by an unnamed government customer. Researchers from iVerify suggest it was built for the U.S. government, and former L3Harris employees have confirmed its origins in the company's Trenchant division. The case of Peter Williams, a former general manager at Trenchant, further illustrates the risks: he was sentenced to seven years in prison for selling hacking tools to a Russian company for $1.3 million, tools that were subsequently used by a Russian espionage group to compromise iPhone users. This situation raises significant concerns about the security of surveillance technologies and the unintended consequences of their proliferation, highlighting the ethical dilemmas faced by defense contractors and the need for stringent oversight to prevent advanced hacking tools from being misused by malicious actors.

Read Article

Grammarly's Misleading Expert Review Feature

March 7, 2026

Grammarly's new feature, Expert Review, claims to enhance users' writing by providing feedback inspired by renowned authors and journalists. However, the feature has drawn criticism for misleadingly implying that these experts are involved in the review process, when in fact, they are not. The feedback is generated based on publicly available works of these individuals without their consent or endorsement. This raises ethical concerns about the authenticity of the advice provided and the potential for misinformation, as users may mistakenly believe they are receiving expert guidance. The lack of actual expert involvement undermines the credibility of the feature and highlights broader issues regarding the transparency and accountability of AI systems in content creation. As AI technologies like Grammarly continue to integrate into everyday tools, the implications of such practices could affect users' trust in AI-generated content and the overall quality of information disseminated online.

Read Article

Grammarly is using our identities without permission

March 6, 2026

Grammarly's new 'Expert Review' feature has raised significant ethical concerns by using the identities of various subject matter experts without their consent. The feature claims to provide writing advice inspired by well-known figures, from deceased professors to current professionals, but many of those named, including editors from The Verge, were unaware of their inclusion. This has led to inaccuracies, with outdated job titles attached to experts without permission, and the AI-generated suggestions often misrepresent the experts' actual views and editing styles, potentially misleading users. The feature has also suffered technical problems, such as linking to unreliable sources, further undermining the integrity of the advice provided. The situation highlights the risks of AI systems misappropriating identities and the potential for misinformation, raising questions about consent and accuracy in AI-generated content.

Read Article

Ethical Concerns of AI in Literary Feedback

March 4, 2026

Grammarly, now under the rebranded company Superhuman, has launched a new feature that provides AI-generated writing feedback based on the styles of both living and deceased authors. This tool raises significant ethical concerns as it utilizes the works of these authors without obtaining their permission, effectively commodifying their intellectual property. The implications of this technology extend beyond mere copyright infringement; it challenges the boundaries of authorship and originality in the digital age. By simulating feedback from renowned figures, the tool risks misleading users into believing they are receiving authentic critiques, which could undermine the value of genuine literary mentorship. Furthermore, this practice may set a precedent for the exploitation of creative works, prompting a broader discussion about the rights of authors and the responsibilities of AI developers. As AI systems continue to evolve, the potential for misuse and ethical dilemmas becomes increasingly pronounced, highlighting the need for stricter regulations and ethical guidelines in AI deployment.

Read Article

Inside the story of the US defense contractor who leaked hacking tools to Russia

February 25, 2026

Peter Williams, a former executive at L3Harris, has been sentenced to 87 months in prison for selling sensitive hacking tools to a Russian firm, Operation Zero, which is believed to collaborate with the Russian government. Exploiting his access to L3Harris's secure networks, Williams downloaded and sold trade secrets, including zero-day exploits, for $1.3 million in cryptocurrency. These tools pose a significant threat, potentially compromising millions of devices globally, including popular software like Android and iOS. The U.S. Treasury has sanctioned Operation Zero, labeling it a national security threat. This incident underscores the vulnerabilities within the defense sector and the risks of insider threats, as advanced hacking tools can fall into the hands of adversaries, including foreign intelligence services and ransomware gangs. Additionally, the case raises concerns about the responsibilities of companies like L3Harris in safeguarding sensitive information and the broader implications for cybersecurity and public trust in institutions. The involvement of the FBI in related investigations further highlights the ethical considerations surrounding the use of surveillance technologies and their potential for abuse.

Read Article

CarGurus Data Breach Exposes Millions of Accounts

February 24, 2026

CarGurus, an online automotive marketplace, recently suffered a significant data breach affecting 12.5 million customer accounts. The breach, reported by the data-breach notification site Have I Been Pwned, involved the theft of sensitive information including names, email addresses, phone numbers, and physical addresses. The ShinyHunters hacking group, known for their social engineering tactics, is believed to be responsible for this breach. This incident highlights the vulnerabilities in cybersecurity within the automotive industry and raises concerns about the handling of personal data by companies. With the increasing reliance on digital platforms for transactions, the risks associated with data breaches pose serious implications for consumer trust and privacy. This breach follows another incident involving CarMax, which underscores a troubling trend of data security failures in the automotive sector. The stolen data could potentially be used for identity theft or phishing attacks, putting millions of individuals at risk. As the digital landscape evolves, the need for robust cybersecurity measures becomes paramount to protect consumer information and maintain confidence in online services.

Read Article

Treasury sanctions Russian zero-day broker accused of buying exploits stolen from US defense contractor

February 24, 2026

The U.S. Treasury has sanctioned Operation Zero, a Russian company involved in acquiring and reselling zero-day exploits—security vulnerabilities unknown to developers that can be exploited maliciously. The sanctions come in response to reports that the company offered up to $20 million for vulnerabilities in widely used devices like Android and iPhones, raising alarms about potential ransomware attacks. The Treasury also targeted Operation Zero's founder, Sergey Zelenyuk, for allegedly selling exploits to foreign intelligence agencies and developing spyware technologies. Additionally, sanctions were imposed on the UAE-based affiliate Special Technology Services and several individuals linked to Operation Zero, citing significant thefts of trade secrets and connections to ransomware gangs. This action reflects ongoing investigations into the unauthorized sale of U.S. government cyber tools, emphasizing the national security risks posed by zero-day brokers and the broader implications for global cybersecurity and defense systems. The sanctions aim to deter such activities and protect sensitive information from exploitation by malicious actors.

Read Article

Cybersecurity Risks from Insider Threats

February 24, 2026

Peter Williams, the former general manager of L3Harris Trenchant, was sentenced to seven years in prison for selling hacking tools and trade secrets to a Russian broker, Operation Zero. These tools, known as zero-days, are vulnerabilities in software that can be exploited for unauthorized access. The U.S. Department of Justice revealed that the tools sold could potentially compromise millions of devices worldwide. Williams, who made $1.3 million from these sales, had previously worked for an Australian spy agency, raising concerns about the implications of insider threats in cybersecurity. The case highlights the risks associated with the commercialization of hacking tools and the potential for these technologies to be used against national security interests. The U.S. Treasury Department has since sanctioned Operation Zero, which is known for reselling such exploits to the Russian government and local firms, further complicating the geopolitical landscape of cybersecurity and technology transfer.

Read Article

Fintech Data Breach Exposes Customer Information

February 18, 2026

A significant data breach at the fintech company Figure has compromised the personal information of nearly one million customers. The breach, confirmed by Figure, involved the unauthorized access and theft of sensitive data, including names, email addresses, dates of birth, physical addresses, and phone numbers. Security researcher Troy Hunt analyzed the leaked data and reported that it contained 967,200 unique email addresses linked to Figure customers. The cybercrime group ShinyHunters claimed responsibility for the attack, publishing 2.5 gigabytes of the stolen data on their leak website. This incident raises concerns about the security measures in place at fintech companies and the potential risks associated with the increasing reliance on digital financial services. Customers whose data has been compromised face risks such as identity theft and fraud, highlighting the urgent need for stronger cybersecurity protocols in the fintech industry. The implications of such breaches extend beyond individual customers, affecting trust in digital financial systems and potentially leading to regulatory scrutiny of companies like Figure. As the use of AI and digital platforms grows, understanding the vulnerabilities that accompany these technologies is crucial for safeguarding personal information and maintaining public confidence in financial institutions.

Read Article

Data Breach Exposes Risks in Fintech Security

February 13, 2026

Figure Technology, a blockchain-based fintech lending company, has confirmed a data breach resulting from a social engineering attack that compromised sensitive customer information. The breach was executed by the hacking group ShinyHunters, which claimed responsibility and published 2.5 gigabytes of stolen data, including personal details such as full names, addresses, dates of birth, and phone numbers. Figure's spokesperson indicated that the company is in communication with affected individuals and is offering free credit monitoring services. The incident highlights the vulnerability of fintech companies to cyber threats, particularly those relying on single sign-on providers like Okta, whose customers were also targeted in a broader hacking campaign affecting institutions like Harvard University and the University of Pennsylvania. The implications of such breaches are significant: they not only jeopardize individual privacy but also erode trust in digital financial services, potentially affecting the entire fintech industry and its customers.

Read Article

Risks of Stalkerware: Privacy and Safety Concerns

February 9, 2026

The proliferation of stalkerware applications, designed to enable users to monitor and spy on their partners, raises significant concerns about privacy and safety. These apps, which are marketed to those with jealous tendencies, have been linked to numerous data breaches, exposing sensitive personal information of both users and victims. Over the years, at least 27 stalkerware companies have experienced hacks, leading to the public release of customer data, including payment information and private communications. Notable incidents include the recent breach of uMobix, which compromised over 500,000 customers, and earlier breaches of other companies like mSpy and Retina-X, which have shown a troubling pattern of negligence in protecting user data. Despite the serious implications of stalking and abuse associated with these apps, they continue to operate with minimal regulation, making them a risk not just to individual victims but to broader societal safety. The ongoing targeting of these companies by hacktivists highlights both the ethical concerns surrounding stalkerware and the vulnerabilities inherent in their operations. Given that many of these companies prioritize profit over user safety and data security, the risks associated with stalkerware extend beyond privacy violations to potential real-world harm for unsuspecting victims.

Read Article

Conduent Data Breach Affects Millions Nationwide

February 5, 2026

A significant data breach at Conduent, a major government technology contractor, has potentially impacted over 15.4 million individuals in Texas and 10.5 million in Oregon, highlighting the extensive risks associated with the deployment of AI systems in public service sectors. The breach, initially reported to affect only 4 million people, has grown dramatically in scale, as Conduent handles sensitive information for various government programs and corporations. The stolen data includes names, Social Security numbers, medical records, and health insurance information, raising serious privacy concerns. Conduent's slow response, including vague statements and delayed notifications, has exacerbated the situation; the company says it will take until early 2026 to notify all affected individuals. The breach, claimed by the Safeway ransomware gang, underscores the vulnerability of AI-driven systems in managing critical data, as well as the potential for misuse by malicious actors. The implications are profound, affecting millions of Americans' privacy and trust in government technology services, and spotlighting the urgent need for enhanced cybersecurity measures and accountability in AI applications.

Read Article