AI Against Humanity
Financial Services

Explore articles and analysis covering Financial Services in the context of AI's impact on humanity.

Articles

A new Anthropic model found security problems ‘in every major operating system and web browser’

April 7, 2026

Anthropic has introduced a new AI model, Project Glasswing, aimed at strengthening cybersecurity by identifying vulnerabilities in major operating systems and web browsers. Operating with minimal human intervention, the model has flagged thousands of high-severity vulnerabilities, raising concerns about its autonomous capabilities. It is being made available to select partners, including major tech companies and financial institutions, to help them patch security flaws. However, the lack of human oversight poses significant risks, since the model autonomously develops exploits for the vulnerabilities it identifies. This raises ethical questions about deploying such powerful AI systems without adequate safeguards, as well as about the potential for misuse by adversaries. The article underscores the need for careful consideration of AI's role in cybersecurity and of the implications of its autonomous functioning, especially given ongoing discussions between Anthropic and U.S. government officials about the model's capabilities.

Musk's Grok Subscription Mandate Raises Concerns

April 3, 2026

Elon Musk is requiring banks and other firms involved in SpaceX's initial public offering (IPO) to purchase subscriptions to Grok, his AI chatbot service. Reports indicate that some banks have agreed to spend tens of millions of dollars on Grok, which is being integrated into their IT systems. The IPO, expected to raise over $50 billion and potentially become the largest in history, gives the banks involved strong financial incentives to comply, since they stand to earn substantial fees from the deal. The mandate is especially troubling given ongoing investigations into Grok's generation of inappropriate content, including child sexual abuse material. The situation illustrates how financial interests and ethical considerations intertwine in AI deployment, and highlights the risks posed by AI systems that are not adequately regulated or monitored. Musk's insistence on Grok subscriptions also reflects broader questions about the influence of powerful individuals on technology and the ethical responsibilities of companies deploying AI systems.

Inside the stealthy startup that pitched brainless human clones

March 30, 2026

R3 Bio, a stealth startup based in Richmond, California, has unveiled plans to create nonsentient monkey 'organ sacks' as an alternative to animal testing, raising ethical concerns about their broader ambitions. The founder, John Schloendorn, has proposed the controversial idea of producing 'brainless clones' for organ harvesting, suggesting that these clones would serve as backup bodies for humans needing transplants. This concept, inspired by medical conditions that result in minimal brain function, has sparked alarm among scientists and ethicists who question the morality and safety of such endeavors. Despite R3's claims of focusing solely on animal models, their discussions at high-profile longevity conferences hint at a more radical agenda involving human cloning. The implications of these technologies pose significant ethical dilemmas, particularly regarding the treatment of clones and the potential for exploitation by wealthy individuals or authoritarian regimes. The article emphasizes the need for public discourse and ethical boundaries in biotechnology, especially as advancements in cloning and organ replacement technologies progress.

Concerns Over AI in Military Applications

March 26, 2026

Shield AI, a defense startup specializing in autonomous military aircraft, has achieved a valuation of $12.7 billion following a significant $1.5 billion Series G funding round. This funding was led by Advent International and included investments from JPMorgan Chase and Blackstone. The surge in valuation, a remarkable 140% increase from the previous year, is attributed to the selection of Shield AI's Hivemind autonomy software for the U.S. Air Force's Collaborative Combat Aircraft drone prototype program. This move reflects a strategic decision by the Air Force to avoid dependency on a single vendor, as Shield AI's software will be integrated with Anduril's competing Lattice software for the Fury autonomous fighter jet. The implications of such advancements in military AI technology raise concerns about the ethical ramifications and potential risks associated with deploying autonomous systems in warfare, including accountability for actions taken by AI and the potential for escalation in conflicts. As military applications of AI expand, it is crucial to consider the societal impacts and the ethical frameworks guiding their use in combat scenarios.

Concerns Over PCAST's Non-Scientific Appointments

March 25, 2026

The article discusses the recent staffing of the President’s Council of Advisors on Science and Technology (PCAST) under the Trump administration, highlighting a significant lack of scientists among its members. Instead, the council is predominantly filled with wealthy technology figures, raising concerns about its capability to address fundamental scientific research and its implications for technology development. The focus appears to be more on commercial technologies rather than on the critical analysis of emerging scientific issues, which could hinder the council's effectiveness in guiding policy related to science and technology. The absence of academic researchers on the council suggests a potential neglect of essential scientific insights, which could have far-reaching consequences for innovation and the American workforce. This shift in focus reflects a broader trend of prioritizing commercial interests over foundational research, potentially impacting the integrity and direction of technological advancements in society.

Arc expands into electric commercial and defense boats with $50M raise

March 19, 2026

Arc Boat Company, a Los Angeles startup, has raised $50 million in a Series C funding round to expand into the commercial and defense sectors. The funding comes from prominent investors such as Eclipse, a16z, and Menlo Ventures. Founder Mitch Lee aims to electrify marine propulsion systems, drawing inspiration from Tesla's approach of establishing a strong consumer base before venturing into commercial applications. Lee believes the entire boating industry will transition to electric systems, driven by decreasing costs of electric technologies and increasing expenses associated with combustion engines, which face compliance and environmental challenges. With a growing workforce of around 200 employees, many of whom have backgrounds at companies like SpaceX and Tesla, Arc is poised for rapid innovation. The company plans to focus on designing propulsion systems tailored to customer needs rather than building entire boats. As it explores autonomous vessels, Arc recognizes the importance of reliability and safety, emphasizing the need for rigorous testing and regulatory oversight to ensure operational efficiency and mitigate risks associated with AI deployment in maritime contexts.

World's New Tool for AI Shopping Verification

March 17, 2026

World, co-founded by Sam Altman, has launched a new verification tool called AgentKit to address growing concerns around 'agentic commerce,' in which AI programs make purchases on behalf of users. The trend offers convenience but poses significant risks of fraud and internet abuse as more consumers rely on AI agents for online shopping. AgentKit integrates with World ID, which is derived from biometric data, specifically iris scans, to ensure that a verified human is behind each transaction an AI agent makes. The system aims to build trust in automated transactions, especially as major companies like Amazon and Mastercard adopt similar technologies. Reliance on biometric verification, however, also raises privacy concerns, highlighting the complex ethical implications of deploying AI in commercial settings. As the industry evolves, robust safeguards become increasingly critical to prevent misuse and maintain consumer confidence in AI-driven commerce.

World ID: Unique Identity for AI Agents

March 17, 2026

The article discusses the launch of World ID by the identity startup World, which aims to create a unique online identity for AI agents through iris-scanning technology. The initiative follows the company's previous venture, Worldcoin, and seeks to mitigate the problem of automated agents overwhelming online systems, a phenomenon known as a Sybil attack. Using AgentKit, World proposes that AI agents can prove their authenticity and demonstrate that they represent actual humans, allowing them to access online resources without flooding systems with requests. The system's success, however, hinges on widespread adoption of iris scans, which remains a significant hurdle. The article highlights the potential for AI misuse and the difficulty of establishing trust in online interactions, emphasizing the need for secure identity verification in an increasingly automated world.

Meta's Major Stake in AMD's AI Chips

February 24, 2026

Meta has entered into a multi-billion-dollar deal with AMD to acquire customized chips with a total capacity of 6 gigawatts, an arrangement that could leave Meta owning a 10% stake in AMD. The deal is part of Meta's strategy to expand its AI capabilities, as the company plans to nearly double its AI infrastructure spending to $135 billion this year. The chips will primarily be used for inference workloads, which involve running AI models after they have been trained. The deal reflects a growing trend in the tech industry of circular financing arrangements to support massive AI infrastructure build-outs, raising concerns about the sustainability and financial implications of such funding strategies, particularly as tech giants like Meta face pressure to tap bond and equity markets to fund their ambitious infrastructure plans. The chips' power requirements are substantial: run continuously, 6 gigawatts corresponds to roughly the annual electricity consumption of 5 million US households, highlighting the environmental impact of scaling AI technologies. As Meta and AMD solidify their partnership, the implications of the deal extend beyond financial interests, potentially shaping the future landscape of AI development and deployment.
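The household comparison can be checked with quick arithmetic. The sketch below assumes the 6 GW capacity is drawn continuously year-round and takes roughly 10,500 kWh as the average annual electricity consumption of a US household; both are assumptions for illustration, not figures stated in the article.

```python
# Rough sanity check on "6 GW ~ 5 million US households".
capacity_gw = 6
hours_per_year = 8760  # 365 days * 24 hours

# Convert GW to kW, then multiply by hours to get kWh per year.
annual_energy_kwh = capacity_gw * 1e6 * hours_per_year

# Assumed average annual US household electricity use (approximate).
household_kwh_per_year = 10_500

households = annual_energy_kwh / household_kwh_per_year
print(f"{households / 1e6:.1f} million households")  # prints "5.0 million households"
```

Under these assumptions the figure cited in the summary comes out to about 5 million households, so the comparison is internally consistent.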

As AI data centers hit power limits, Peak XV backs Indian startup C2i to fix the bottleneck

February 16, 2026

As the demand for AI data centers surges, energy consumption has become a critical limiting factor, prompting investment in solutions that improve efficiency. C2i Semiconductors, an Indian startup, has secured $15 million in funding from Peak XV Partners, Yali Deeptech, and TDK Ventures to develop advanced power solutions that reduce energy losses in data centers. Current estimates suggest that electricity consumption from data centers could nearly triple by 2035, with inefficient energy-conversion processes adding further to the load. C2i's technology aims to minimize this waste by integrating power conversion and control into a single system, potentially saving substantial amounts of energy and reducing operational costs for data centers. The investment highlights the growing importance of energy efficiency in AI infrastructure, as companies seek to balance the high cost of energy consumption against the need for scalable AI solutions. The implications extend beyond economics, as the environmental impact of rising energy demand raises concerns about sustainability and the carbon footprint of AI technologies.

AI-Generated Dossiers Raise Ethical Concerns

February 14, 2026

The article discusses the launch of Jikipedia, a platform that transforms the contents of Jeffrey Epstein's emails into detailed dossiers about his associates. These AI-generated entries include information about the individuals' connections to Epstein, their alleged knowledge of his crimes, and the properties he owned. While the platform aims to provide a comprehensive overview, it raises concerns about the potential for inaccuracies in the AI-generated content, which could misinform users and distort public perception. The reliance on AI for such sensitive information underscores the risks associated with deploying AI systems in contexts that involve significant ethical and legal implications. The use of AI in this manner highlights the broader issue of accountability and the potential for harm when technology is not carefully regulated, particularly in cases involving criminal activities and high-profile individuals. As the platform plans to implement user reporting for inaccuracies, the effectiveness of such measures remains to be seen, emphasizing the need for critical scrutiny of AI applications in journalism and public information dissemination.

AI's Role in Reshaping Energy Markets

February 10, 2026

Tem, a London-based startup, has raised $75 million in a Series B funding round to reshape electricity markets with AI. The company has developed an energy transaction engine called Rosso, which uses machine-learning algorithms to match electricity suppliers with consumers directly, cutting out intermediaries and reducing costs. Tem's focus on renewable energy sources and small businesses has attracted over 2,600 customers in the UK, including well-known brands such as Boohoo Group and Fever-Tree. While the AI-driven approach promises lower energy prices and improved market efficiency, concerns remain about monopolistic practices and the impact of AI on employment in the energy sector. As Tem plans to expand into Australia and the U.S., the effects of its system on existing energy markets and labor dynamics must be closely monitored. The startup's dual business model, which includes the neo-utility RED, aims to showcase the benefits of its technology while ensuring that no single entity controls a large portion of the market. This raises questions about the balance between innovation and regulation in AI-driven industries.

AI's Hidden Impact on Job Losses in NY

February 9, 2026

In New York, over 160 companies, including major players like Amazon and Goldman Sachs, have reported mass layoffs since March without attributing these job losses to technological innovation or automation, despite a state requirement for such disclosures. This lack of transparency raises concerns about the true impact of AI and automation on employment, as companies continue to adopt these technologies while avoiding accountability for their effects on the workforce. The implications of this trend highlight the challenges faced by workers who may be unjustly affected by AI-driven decisions without adequate support or recognition. By not acknowledging the role of AI in job cuts, these companies create a veil of ambiguity, making it difficult for policymakers to understand the full extent of AI's economic repercussions and to formulate appropriate responses. The absence of disclosure not only complicates the landscape for affected workers but also obscures the broader societal impacts of AI integration into the labor market.

Sapiom's $15M Boost for Autonomous AI Transactions

February 5, 2026

Sapiom, a San Francisco startup founded by former Shopify director Ilan Zerbib, has raised $15 million to develop a financial layer that lets AI agents autonomously purchase software services and APIs. The technology aims to streamline the back-end processes involved in AI operations, allowing non-technical users to build apps with minimal infrastructure knowledge, and will facilitate seamless transactions between AI agents and external services like Twilio, effectively letting agents handle financial decisions without human intervention. Notable investors in the round include Accel, Okta Ventures, Gradient Ventures, and Anthropic. While the focus is currently on B2B solutions, the technology could eventually extend to personal AI agents, allowing individuals to entrust AI with their financial transactions. That prospect raises concerns about AI systems making independent financial decisions, which could have unforeseen consequences for users and industries alike.

Read Article

Risks of AI Agent Management Platforms

February 5, 2026

OpenAI has introduced Frontier, a platform aimed at helping enterprises manage AI agents, which are becoming increasingly integral to business operations. This end-to-end platform allows users to program AI agents to interact with external data and applications, enabling them to perform tasks beyond OpenAI's own capabilities. While Frontier is designed to function similarly to employee management systems, including onboarding processes and feedback loops, it raises concerns about AI's impact on workforce dynamics and accountability. Major companies such as HP, Oracle, State Farm, and Uber are among the initial clients, highlighting the growing reliance on AI in enterprise settings. The emergence of agent management platforms signifies a shift in how businesses will operate, but it also raises questions about data privacy, job displacement, and the ethical implications of AI decision-making. As the technology evolves, understanding its societal impacts becomes essential, particularly as enterprises adopt AI systems without fully grasping the potential risks they entail.
