🤖 AI Policy this week #054. EU’s Code of Practice released; New Zealand’s 1st AI Strategy
A quick summary of news, reports and events discussing the present and future of AI and the governance framework around its development.
Some steps back compared to the previous edition: the U.S. Senate ultimately dropped the proposed 10-year moratorium from the approved "Big Beautiful Bill," and the EU released a voluntary Code of Practice for GPAI models—despite growing calls for a pause. "Frontier AI" remains a hot topic, with Anthropic proposing a new transparency framework. Meanwhile, New Zealand launched its national AI strategy, focused on boosting productivity and driving economic growth.
1. News
The EU released the final version of the voluntary Code of Practice for providers of general-purpose AI (GPAI) models.
The measure is intended to act as a preparatory framework for businesses ahead of the formal enforcement of the AI Act’s GPAI provisions from 2026. The code, developed by a group of 13 independent experts, focuses on three main areas: transparency requirements, copyright protection, and safety and security provisions for advanced AI models. While non-binding, the European Commission has stated that signatories to the code will benefit from legal certainty during the transition period. “In the following weeks, Member States and the Commission will assess its adequacy. Additionally, the code will be complemented by Commission guidelines on key concepts related to general-purpose AI models, to be published still in July”, the EU Commission press release reads. According to the CCIA, however, the code “still imposes a disproportionate burden on AI providers”.
French President Macron calls for a UK-France AI alliance to catch up with the US and China.
Macron said the neighbouring nations "are lagging behind both the US and China... And the big question for all of us is how to be part of the competition and indeed to de-risk our model and not to be dependent on US and/or Chinese solutions". The president added: "The partnership between the UK and France is for me critical, because we... face the same challenges." He said "having closer links is the best way to fix... critical issues on research, science and AI".
US Senate Drops Proposed Moratorium on State AI Laws in Budget Vote.
The United States Senate voted 99-1 to pass an amendment to the budget bill removing the proposed 10-year moratorium on the enforcement of state laws on artificial intelligence. The provision in the Senate bill would have effectively prevented states from enforcing many proposed and existing AI-related laws for the next decade. “State legislatures all across the country have done critical bipartisan work to protect the American people from some of the most dangerous harms of AI technology,” Ilana Beller, democracy organizing manager at the progressive consumer advocacy group Public Citizen, said in a statement.
Over 60 organizations sign White House pledge to invest in AI education.
Some 67 tech companies and associations have signed a pledge supporting the Trump administration’s goal of making artificial intelligence education accessible to all students, the White House announced. Each signatory, according to the pledge, promised to “provide resources that foster early interest in AI technology, promote AI literacy, and enable comprehensive AI training for educators.” Companies that signed the pledge, which include Google, IBM, MagicSchool, Meta, Microsoft, NVIDIA and Varsity Tutors, are expected to release more detailed plans on their commitments in the coming days.
Ohio schools must set AI policies by mid-2026.
All of Ohio's K-12 public schools must adopt a policy on the appropriate use of artificial intelligence by next July, per a mandate in the new state budget. The Ohio Department of Education and Workforce will first develop a model policy by Dec. 31, which districts can draw inspiration from or adopt outright, the budget states.
New Zealand unveils its 1st national artificial intelligence strategy.
New Zealand unveiled its first national artificial intelligence (AI) strategy, aiming to boost productivity and drive economic growth, state media reported. Science, Innovation and Technology Minister Shane Reti announced the much-awaited initiative, highlighting that AI could contribute up to $45.76 billion to the country's GDP by 2038, Radio New Zealand reported. Titled “Investing with Confidence”, the plan has been met with enthusiasm from the business sector, but concern from critics who say it sets a "dangerous path forward" and is "worryingly light" on ethical considerations.
YouTube to update monetization policies as it battles AI content farms.
A preliminary update to the program's monetization policies reads: “In order to monetize as part of the YouTube Partner Program (YPP), YouTube has always required creators to upload ‘original’ and ‘authentic’ content. On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content. This update better reflects what ‘inauthentic’ content looks like today”. The exact language of the new policy has yet to be released. The company later clarified that the policy isn't exactly a crackdown on channels, like those that primarily post videos reacting to and providing commentary on other media, or entirely “faceless” channels. These won't be penalized by the update if they already qualify for monetization. “This is a minor update to YouTube's longstanding YPP policies to help better identify when content is mass produced or repetitive. This type of content has already been ineligible for monetization for years, and is content viewers often consider spam,” said Rene Ritchie, head of editorial for YouTube, via the YouTubeInsider account.
2. Reports, Briefs and Opinion Pieces
“Entity-Based Regulation in Frontier AI Governance”, by the Carnegie Endowment for International Peace.
Dean W. Ball co-authored this piece before joining the U.S. Office of Science and Technology Policy. “We are fairly confident that entity-based regulation should play a significant role in frontier AI governance. The scope, design, and implementation of entity-based frontier AI regulation warrant careful consideration”, the authors conclude.
“Proposed Frontier Model Transparency Framework”, by Anthropic.
“Frontier AI development needs greater transparency to ensure public safety and accountability for the companies developing this powerful technology. AI is advancing rapidly. While industry, governments, academia, and others work to develop agreed-upon safety standards and comprehensive evaluation methods—a process that could take months to years—we need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently”, the company argues.
3. Events
13th World Peace Forum (July 2–4, Beijing, China).
On July 4, 2025, the International Committee of the Red Cross (ICRC) and the Institute of International Relations at Tsinghua University (TUIIR) co-hosted a panel on “Challenges and Opportunities of the Use of AI in Armed Conflict” as part of the 13th World Peace Forum in Beijing. Experts and scholars from China, Europe and the United States, as well as representatives from humanitarian organizations and government institutions, engaged in in-depth discussions on the transformative impacts of AI on modern warfare and international security, its ethical and humanitarian implications, and the urgent need for effective global legal regulations.
Thanks for reading, please share any comments and see you next week.