🤖 AI Policy this week #053. US judges dismiss copyright lawsuits over AI training; voices in the EU back a “pause” of the AI Act
A quick summary of news, reports and events discussing the present and future of AI and the governance framework around its development.
Copyright and AI training were at the center of two major U.S. court rulings, with claims against Meta dismissed on technical grounds and Anthropic partially shielded under “fair use” for using books to train its models. Meanwhile, the U.S. Senate advances a proposed 10-year moratorium on state-level AI regulation, as Texas enacts a comprehensive new AI law. Globally, regulatory momentum is mixed: Sweden’s Prime Minister calls for a pause on the EU AI Act, while the UK takes a leading role in health AI safety. India’s Maharashtra state launches an ambitious AI strategy for agriculture, and UNESCO unveils new global AI governance networks at its flagship ethics forum.
1. News
Anthropic did not breach copyright when training AI on books without permission, US court rules.
A federal judge in San Francisco said Anthropic made “fair use” of books by the writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model (LLM). Judge William Alsup compared the Anthropic model’s use of books to a “reader aspiring to be a writer” who uses works “not to race ahead and replicate or supplant them” but to “turn a hard corner and create something different”. Alsup added, however, that Anthropic’s copying and storage of more than 7 million pirated books in a central library infringed the authors’ copyrights and was not fair use, although the company later bought “millions” of print books as well. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement. “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,” Alsup wrote.
Another US Judge dismisses authors’ copyright lawsuit against Meta over AI training.
A federal judge sided with Facebook parent Meta Platforms in dismissing a copyright infringement lawsuit from a group of authors who accused the company of stealing their works to train its artificial intelligence technology. U.S. District Judge Vince Chhabria found that the 13 authors who sued Meta “made the wrong arguments” and tossed the case. But the judge also said that the ruling is limited to the authors in the case and does not mean that Meta’s use of copyrighted materials is lawful. “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria wrote. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.” The ruling was the second in a week from San Francisco’s federal court to dismiss major copyright claims from book authors against the rapidly developing AI industry. For the NGO EFF, there were “Two Courts Rule On Generative AI and Fair Use — One Gets It Right”.
US Senate parliamentarian greenlights AI moratorium.
A provision that bars states from regulating artificial intelligence (AI) for a 10-year period can remain in President Trump’s sweeping tax package, the Senate parliamentarian determined. The decision, announced by Democrats on the Senate Budget Committee, once again found that the moratorium clears a procedural hurdle known as the Byrd Rule. While the AI moratorium has cleared the Byrd Rule, it may still face additional hurdles, with several House and Senate Republicans voicing opposition to the measure. Sens. Marsha Blackburn (R-Tenn.), Ron Johnson (R-Wis.) and Josh Hawley (R-Mo.) and Rep. Marjorie Taylor Greene (R-Ga.) have all come out against the provision.
Texas Signs Responsible AI Governance Act Into Law.
Texas has become the second state, after Colorado, to enact omnibus legislation regulating artificial intelligence (AI) systems. On June 22, 2025, Texas Gov. Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will go into effect on January 1, 2026. The Act establishes a new regulatory framework that applies to developers and deployers of AI systems conducting business in Texas or producing AI products or services used by Texas residents.
UK healthcare agency (MHRA) becomes first to join new global network on the safe use of AI in health.
The UK became the first country in the world to join a new global network of health regulators focused on the safe, effective use of artificial intelligence (AI) in healthcare. “The move puts the Medicines and Healthcare products Regulatory Agency (MHRA) at the centre of global efforts to get trusted AI tools safely into clinics faster – supporting earlier diagnosis, cutting NHS waiting times, and backing growth in the UK’s health tech sector”, the press release reads. By joining the HealthAI Global Regulatory Network as a founding ‘pioneer’ country, the MHRA will work with regulators around the world to share early warnings on safety, monitor how AI tools perform in practice, and shape international standards together – helping make AI in healthcare safer and more effective for patients around the world. Other countries are expected to join in the coming months.
Swedish PM calls for a pause of the EU’s AI rules.
The EU’s artificial intelligence rules should be paused, Swedish Prime Minister Ulf Kristersson said. While officials in countries including the Czech Republic and Poland have shown openness to the idea of delaying the rules, it’s the first time that a government leader has weighed in. Kristersson slammed the EU’s AI rules as “confusing” during a meeting with Swedish parliament lawmakers. “An example of confusing EU regulations is the fact that the so-called AI Act is to come into force without there being common standards,” Kristersson said.
Indian state Maharashtra’s Cabinet approves $58.5 million MahaAgri‑AI Policy to revolutionize farming.
Maharashtra’s state cabinet has approved the MahaAgri‑AI Policy 2025‑29, allocating approximately $58.5 million over the first three years to integrate AI, drones, robotics, IoT, and generative AI into agriculture. The rollout will proceed in four phases: institutional setup and technology pilots; statewide scaling of successful models; integration with national digital platforms such as AgriStack and Bhashini; and an impact review ahead of expansion into sectors such as horticulture and livestock. Key initiatives include a State AI & Agritech Innovation Centre, four incubation hubs at agricultural universities, a digital Agricultural Data Exchange (A‑DeX), geospatial and traceability systems powered by satellite and blockchain technologies, farmer-focused apps and chatbots, and innovation events such as hackathons and summits.
2. Reports, Briefs and Opinion Pieces:
“California Report on Frontier AI Policy”, by the Joint California Policy Working Group on AI Frontier Models.
Final report presenting a framework for the governance of AI models in California. The Working Group was convened by Governor Gavin Newsom when he announced his veto of Senator Scott Wiener’s SB 1047. “The opportunity to establish effective AI governance frameworks may not remain open indefinitely,” says the report.
“Risk Tiers: Towards a Gold Standard for Advanced AI”, by researchers at the Oxford Martin School.
The Oxford Martin AI Governance Initiative (AIGI) convened experts from government, industry, academia, and civil society to lay the foundation for a gold standard for advanced AI risk tiers. “A complete gold standard will require further work. However, the convening provided insights for how risk tiers might be adapted to advanced AI while also establishing a framework for broader standardization efforts”.
“CDT Europe’s assessment of the AI Office’s guidelines on prohibited AI practices”, by the Center for Democracy & Technology.
CDT Europe assessed the AI Office’s guidelines on the AI Act’s prohibited practices and the contribution they make to interpreting the law. “We note the guidelines’ overall positive contribution to the interpretation of the AI Act, as well as some areas where the prohibitions could be further clarified or could have benefitted from a stronger, fundamental-rights-based interpretation”.
“Whistleblower Protections for AI Employees”, by the Center for AI Policy.
In collaboration with Psst.org, the Center for AI Risk Management & Alignment, and OAISIS/Third Opinion, CAIP presented this research report on “how the public can responsibly enjoy the benefits of strong whistleblower protections for AI employees”.
3. Events:
UNESCO’s 3rd Global Forum on the Ethics of AI (Jun 24-27th, Bangkok, Thailand).
With 22 thematic sessions and 11 side-events, the Global Forum covered essential topics such as gender, environment, health, disaster risk reduction, disabilities, education, culture, neurotechnology, quantum computing, and judicial systems. The Global Forum also launched major initiatives. The newly established Global Network of AI Supervisory Authorities, developed in collaboration with national regulators, including the Dutch Authority for Digital Infrastructure (RDI), will share knowledge and build capacity to implement AI policies. The Global Network of Civil Society and Academia was also launched to support citizen participation in AI-related decision-making worldwide.
Cato Institute: “AI Policy Today and Beyond: A Fireside Chat with Rep. Rich McCormick” (Jun 25th, virtual).
The chat featured Congressman Rich McCormick and Matt Mittelsteadt, Cato policy fellow in technology, and explored “the evolving landscape of artificial intelligence (AI) and cybersecurity policy, and the state of AI in Congress”.
Thanks for reading, please share any comments and see you next week.