🤖 AI Policy this week #28. California AI Bill approved and awaiting the Governor's signature; Brazil's DPA greenlights data use by Meta.
A quick summary of news, reports and events discussing the present and future of AI and the governance frameworks around its development.
1. News
Brazil’s DPA: Meta complies with requirements and will be able to resume, with restrictions, the use of personal data to train AI.
The National Data Protection Authority (ANPD) suspended the ban it had imposed on Meta's use of personal data to train its artificial intelligence. The decision comes in response to an appeal submitted by Meta, supported by documentation presented by the company and commitments it has made. In its new decision, the Board of Directors approved a Compliance Plan, which includes various measures the company must implement to adapt its practices.
California Legislature approves AI Bill; it now goes to Gov. Newsom's desk.
Following a 32-1 vote in the Senate in May, the Assembly voted 48-15 to pass the bill, and the Senate voted to concur with amendments. The proposal, which aims to reduce potential risks created by AI, would require companies to test their models and publicly disclose their safety protocols. The bill is among hundreds lawmakers are voting on during the Legislature's final week of session. Gov. Gavin Newsom then has until the end of September to decide whether to sign them into law, veto them or allow them to become law without his signature. Elon Musk had previously joined the debate, backing the bill.
OpenAI supports another California AI bill requiring 'watermarking' of synthetic content.
AB 3211 has already passed the first two rounds of votes and, like the other bill, must now be approved by the state Senate by the end of this month and then signed into law by Governor Gavin Newsom by the end of September. "New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content," OpenAI Chief Strategy Officer Jason Kwon wrote in a letter sent to the bill's author, according to Reuters.
OpenAI, Anthropic enter agreements with US NIST’s AI Safety Institute.
The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI. Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.
The Transparency Council of Chile publishes Recommendations on Algorithmic Transparency for the public sector.
They aim to guide and promote the adoption of good practices regarding the transparency and disclosure of automated and semi-automated decision systems (SDA) used in the public sector, both in terms of respecting the exercise of the right of access to public information and the proactive publication of relevant information about such systems. The Recommendations define an SDA as any technology, system or process that, through a computer system, helps, assists, supports or replaces decision-making by an individual working in a public entity, where the decision translates into an administrative act, or into an act with legal effects when the entity does not itself issue administrative acts.
US Democratic lawmakers call for crackdown on AI deepfakes after Grok backlash.
A group of Democratic lawmakers is pushing the Federal Election Commission (FEC) to increase regulation of artificial intelligence (AI) deepfakes following the release of the social platform X's chatbot Grok. In a letter to the FEC, Rep. Shontel Brown (D-Ohio) and a handful of other House members asked the regulatory agency to clarify whether AI-generated deepfakes of election candidates are classified as "fraudulent misrepresentation."
Indian policy on AI infrastructure likely after GAIS: Minister Sridhar Babu.
IT Minister D Sridhar Babu said that the state government is likely to formulate a policy on AI infrastructure after the conclusion of the Global AI Summit (GAIS), to be held on September 5 and 6. "After the conclusion of GAIS, we will enter a new journey where the focus will be on what sort of architecture is required to make Hyderabad a global AI capital. Many key companies have shown interest to come and partner with us," he stressed.
Banks and accounting firms should brace for cost of AI job losses, UK unions warn.
Banks, insurers and accounting firms should be prepared to pay for the retraining of millions of employees whose jobs could be displaced by artificial intelligence, UK unions will warn at the Trades Union Congress next month. Accord, which represents banking workers, will call on financial services groups to prepare to fund a "major" programme to reskill many of the sector's almost 2.5mn UK staff in a motion to the labour movement's annual conference, as reported by the FT.
2. Reports, Briefs and Opinion Pieces:
“Bridging the AI Governance Divide” by New America.
“A new paper from New America and the Igarapé Institute examines the global gap in responsible artificial intelligence (AI) frameworks and the risks that emerge when AI policies and practices developed primarily for the Global North are exported to the Global South, where socioeconomic context is different and regulation and infrastructure tend to be less advanced”.
“Generative AI in education: user research and technical report”, by the UK Government.
It reports insights from teachers, leaders and pupils on the potential uses of generative artificial intelligence in education.
“Regulatory Risks and Challenges of Adopting AI in the Telecom Sector”, by OMDIA.
The report points to ‘critical areas’ it believes telcos must focus on with regard to AI regulation, including rules surrounding high-risk situations and prohibited uses, as well as transparency requirements and enforcement.
3. Events:
Derechos humanos y gobernanza de la inteligencia artificial en Latinoamérica [Human rights and AI governance in Latin America] (Aug 28, online).
At ILDA's AI Week, the panel addressed responsible AI and human rights, presented findings from the Global Responsible AI Index, highlighted the need for ethical use of data and the value of open data, and discussed gaps in the protection of human rights, gender and diversity.
Foro de Innovación e Inteligencia Artificial de Uruguay (Aug 29, Montevideo, Uruguay).
The Uruguay Innovation and AI Forum convened policymakers, private sector representatives, academics and civil society to discuss the opportunities and challenges that innovation and AI present for Uruguay and the region.
Coming next: the Council of Europe AI Treaty opens for signature on September 5.