OpenAI and Anthropic are reportedly weighing the option of using investor money to cover potential multibillion-dollar copyright settlements.
A Financial Times report revealed that both companies are exploring alternative ways to handle the risks associated with how their AI models were trained. Copyright owners, including authors, publishers, and media houses, have filed more than a dozen lawsuits against OpenAI, Microsoft, Meta, Anthropic, and other tech companies, accusing them of using protected works without authorisation to train their large language models.
To manage these legal threats, OpenAI has reportedly partnered with Aon, one of the world’s largest insurance brokers, to secure coverage worth up to $300 million for emerging AI-related risks. However, some sources told the Financial Times that the actual figure could be lower, and even at $300 million the coverage would fall far short of the potential damages from the ongoing lawsuits.
Kevin Kalinich, Aon’s global head of cyber risk, explained that the insurance industry itself is struggling to match the scale of risk posed by AI model providers. “The insurance sector broadly lacks enough capacity for (model) providers,” he said.
Because of this gap, OpenAI is reportedly considering “self-insurance”, essentially setting aside investor capital in a protected pool to absorb possible legal costs. Discussions have also surfaced about creating a “captive”, an in-house insurance subsidiary that large firms use to cover risks the traditional market cannot handle.
Anthropic appears to be taking a similar route. According to the Financial Times, the company is using part of its own funds to cover a $1.5 billion settlement that was preliminarily approved by a California federal judge last month.
The case was filed by a group of authors who alleged that their works were used to train Anthropic’s AI system, Claude, without consent.
The mounting copyright claims are forcing AI companies and their backers to confront questions about financial accountability and transparency. If investor funds are being used to offset legal risks, governance issues inevitably follow: who decides how much to reserve for potential liabilities, and how are investors’ interests safeguarded?
Analysts believe these developments could change how AI startups raise and allocate capital. Investors may soon demand clearer disclosures on data sources, litigation exposure, and risk management frameworks before funding new ventures.
Meanwhile, the U.S. Copyright Office is still assessing whether training AI systems on copyrighted content amounts to infringement, while the European Union’s AI Act could compel firms to disclose summaries of their training data, opening another front of legal exposure for AI developers.
OpenAI, Anthropic, and Aon have not commented on the report.