🤖 AI Policy this week #049. The Oscars address Gen-AI use; BRICS adopts declaration on AI and employment.
A quick summary of news, reports and events discussing the present and future of AI and the governance framework around its development.
Gen-AI remains a major area of debate, as shown in a European Commission report examining 33 policy documents across 17 Member States, EU institutions and regions. On that front, the Academy of Motion Picture Arts and Sciences addressed the issue for the first time in its rules for the next Oscars, saying that using the technology would not disqualify a film but that films with greater human involvement would be favored. Let's see what the nominees will be! The international landscape is also moving, with a BRICS declaration on AI and employment, and the UN and OECD signing (again) an MoU in the field.
1. News:
BRICS Adopts Declaration on AI, Climate Change, and Employment at 2025 Labour Ministers’ Meeting.
The 11th BRICS Labour & Employment Ministers' Meeting, held on April 25, 2025, in Brasília under Brazil’s leadership, focused on two critical themes: Artificial Intelligence (AI) and its effect on employment, and the impact of climate change on the workforce. Key Outcomes of the BRICS Declaration:
* Inclusive AI Policies: The declaration commits to promoting AI policies that balance innovation with worker protection.
* Social Dialogue for Fair Transitions: Advancing dialogue to ensure that climate transitions are fair to all workers, particularly those in vulnerable sectors.
* Strengthening South-South Cooperation: Enhancing cooperation among BRICS countries on labour governance, digital inclusion, and the creation of green jobs.
The Academy of Motion Picture Arts and Sciences Releases Extensive Rule Changes for the Next Oscars, Including Guidance on AI.
For the first time, the academy addressed the use of generative artificial intelligence, a technology sweeping into the film capital yet hugely divisive in the industry’s creative ranks. AI and other digital tools “neither help nor harm the chances of achieving a nomination,” the Oscar rules now state. The academy added, however, that the more a human played a role in a film’s creation, the better. (“The academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award.”) The academy had been considering whether to change its submission process to make disclosure of AI use mandatory, but decided not to go that far.
Dutch privacy regulator warns against use of Meta AI.
The Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) said it's “very concerned” about the plans of Meta and other large platforms to train their tools with user data. “It is not yet a done deal whether Meta is allowed to do what the company plans to do,” the statement said, adding: “Among other things, it is questionable whether Meta's opt-out model meets the legal requirements. The AP and the other European supervisors are in close consultation with the Irish supervisor about this.”
The UN and OECD sign an MoU to enhance cooperation on AI.
The UN, represented by the Office for Digital and Emerging Technologies (UDET), and the OECD signed an MoU to enhance cooperation on AI. “The memorandum strengthens collaboration on evidence-based approaches to harness AI’s economic & societal benefits—building on the #GlobalDigitalCompact, OECD AI Principles & @GPAI_PMIA”, the two organisations announced.
US White House Releases AI Education Executive Order.
As anticipated here last week, President Donald Trump signed an Executive Order on Advancing Artificial Intelligence Education for American Youth. The new EO creates an interagency task force on AI and education, proposes an AI challenge for students, supports AI training for educators, and prioritizes apprenticeships in AI-related occupations. “AI in education is more than just a student hack to write a quick essay. It’s a tool that’s reshaping the American economy and that students will need to become proficient in,” said Americans for Responsible Innovation (ARI) President Brad Carson.
US GAO suggests policy reform to mitigate generative AI’s human, environmental risks.
In a new report, the Government Accountability Office pointed to the human and environmental concerns that generative artificial intelligence introduces and how the government might handle them. The federal watchdog said that Congress, agencies and industry could encourage AI developers to use government frameworks, such as those from GAO or the National Institute of Standards and Technology, to defend against harmful AI-generated content that compromises safety, security and privacy.
Austin City Council will consider a substantial expansion of the city’s artificial intelligence oversight.
The resolution builds on a policy passed in February 2024, but goes further by requiring audits, defining acceptable AI uses, and mandating human oversight of AI decisions affecting city employees and operations. The proposed resolution directs the city manager to conduct a regional environmental study, in partnership with Austin Energy and Austin Water, focused on the anticipated growth of data centers over the next decade. The resolution, which is sponsored by Mayor Pro Tem Vanessa Fuentes, also sets guidelines to prohibit the use of AI by the city in areas such as real-time employee surveillance, biometric data collection, and automated decisions in policing or personnel matters. It includes language that creates a “no displacement without consultation” labor policy, requiring prior notice and dialogue with union representatives if AI deployment is expected to eliminate or alter city jobs.
Bill to Create Texas AI Council, Strengthen Regulations Passes House.
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) passed the House floor this week; it aims to set up a framework that balances artificial intelligence (AI) regulation with industry expansion in the state. “House Bill 149 will create a comprehensive framework around AI that will allow for clear legal pathways and protections for consumers that are harmed,” the bill's author, Rep. Giovanni Capriglione, explained.
2. Reports, Briefs and Opinion Pieces:
“AI Agent Governance: A Field Guide”, by the Institute for AI Policy and Strategy (IAPS).
By researchers Jam Kraprayoon, Zoe Williams, and Rida Fayyaz.
“Highlights from the report include:
* What’s coming: Vivid scenarios show what life with millions of AI agents could look like.
* The pace of change: Today’s AI agents struggle with tasks over an hour, but that limit has been doubling every few months.
* The governance gap: We map out the biggest unsolved challenges and introduce a new framework for understanding agent governance solutions”.
“Analysis of the generative AI landscape in the European public sector”, by the European Commission’s Public Sector Tech Watch (PSTW).
The report provides the first perspective, at EU level, on the governance, implementation, and trials of generative AI (GenAI) technologies across public administrations in Europe. “The report examines 33 GenAI-related policy documents across 17 EU Member States, EU institutions and regions and identifies a clear priority on ensuring accountability, transparency, and personal data privacy as well as promoting innovation and public-private collaboration”.
“Comments Received in Response To: Request for Information on the Development of an Artificial Intelligence (AI) Action Plan (“Plan”)”, by the US Office of Science and Technology Policy (OSTP).
The Office of Science and Technology Policy (OSTP), via the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO) and the National Science Foundation, published a Request for Information (RFI) on February 6, 2025, to obtain public input from all interested parties on the Development of an Artificial Intelligence (AI) Action Plan (“Action Plan”). The page includes 10,068 responses to the RFI.
3. Events:
Dubai AI Week (Apr 21-25th, Dubai, UAE).
During Dubai AI Week, the emirate unveiled its new AI policy for government entities and the first edition of the Dubai State of AI Report. The report highlights how AI is already transforming public services in areas like healthcare, urban planning and emergency response, with over 100 real-world use cases either active or in development.
AI+Policy Symposium: A Global Stocktaking (Apr 16th, Stanford, US).
“As the landscape evolves rapidly, this symposium, co-hosted by the Stanford Institute for Human-Centered AI (HAI) and the Stanford Cyber Policy Center, intended to help participants navigate the escalating complexity and make sense of the multifaceted, worldwide policy ecosystem directing society's adoption of AI systems”.
Thanks for reading, please share any comments and see you next week.