
That Was The Week That Was


It is conventional wisdom that technology typically outpaces the law. But the last week of October 2023 may be remembered as the week when lawmakers made a real effort to outpace technology. To be sure, there were plenty of high-level meetings and announcements.

The White House Executive Order on Safe, Secure, and Trustworthy AI.[1] President Biden kicked off the week with the release of a long-awaited executive order on Artificial Intelligence (AI). The Executive Order reflects a whole-of-government approach and is the outcome of White House consultations with tech CEOs, academic experts, civil society leaders, labor organizers, and agency heads. The Executive Order is broad and ambitious, but also vague on specific obligations. Fewer guardrails and more lane markers, the EO seeks to organize the role of the federal agencies as the U.S. government confronts the challenges and opportunities of AI. There are eight guiding principles, but these goals are less clearly stated than in earlier executive orders. The top priority is now "safe and secure" AI, which will require testing, standards, and labeling. Much of the EO aims to make clear the current authorities of the federal government to promote competition, protect civil rights, train workers, and advance cybersecurity. There was a welcome mention of Privacy Enhancing Technologies that would limit or eliminate the collection of personal data. (President Biden drew applause at the White House release of the Order when he linked these techniques to children's privacy.)

Federal agencies will be tasked with preparing reports to identify AI use and to mitigate risk. The Commerce Department will have a leading role for many of the regulations that are likely to follow, including the creation of an AI Safety Institute. The President invoked the Defense Production Act to allow the Commerce Department to regulate dual-use foundation models in the private sector. The reporting requirements will cover "any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations," with a lower threshold for models trained primarily on biological sequence data. NIST has a lot of work ahead.
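For a sense of what the 10²⁶ figure means in practice, here is a minimal back-of-the-envelope sketch in Python. The two thresholds come from the text of the Executive Order; the "6 FLOPs per parameter per training token" rule of thumb and the illustrative model size are assumptions of the sketch, not anything the Order prescribes.

```python
# Back-of-the-envelope check against the EO's reporting thresholds.
# The thresholds below are taken from the Executive Order; the 6 * N * D
# compute estimate is a common community heuristic, not an EO formula.

EO_GENERAL_THRESHOLD = 1e26  # FLOPs, for dual-use foundation models
EO_BIO_THRESHOLD = 1e23      # FLOPs, for models trained primarily on biological sequence data

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Estimate total training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

# Purely illustrative: a 1-trillion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.00e+25
print(f"Reportable under the general threshold? {flops > EO_GENERAL_THRESHOLD}")  # False
```

By this rough estimate, even a trillion-parameter model trained on ten trillion tokens falls just under the 10²⁶ line, which suggests the reporting requirement targets only the very largest frontier training runs.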

Developing capabilities for identifying and labeling synthetic content produced by AI systems may be among the more perplexing mandates of the Executive Order. Will the aim be to label synthetic content as a warning, or to authenticate non-synthetic content as reliable?


OMB Guidance on Agency Use of AI.[2] The Executive Order on AI was followed later in the week by the OMB's proposed memorandum on "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." The OMB Guidance provides specific direction to federal agencies: it will establish Chief AI Officers and AI Governance Bodies across the federal government, and agencies will be expected to develop Compliance Plans and enhance Use Case Inventories. Agencies are also expected to identify and follow minimum practices for "rights-impacting" and "safety-impacting" systems, which will require AI impact assessments and independent evaluation. AI systems that do not comply with the minimum practices could be shut down by August 2024. Still unclear is how minimum practices will be determined, when waivers will be granted, and what rights individuals will have when subject to unfair AI-based decisions. When Congress first set out to regulate computers in the federal government, it established extensive obligations for deployment and specific rights for individuals. Those with opinions about how the OMB should proceed with AI systems are invited to submit comments to OMB before December 5, 2023.[3]


The Vice President's Speech on the Future of Artificial Intelligence.[4] Vice President Kamala Harris took the occasion of the U.K. AI Safety Summit to deliver a provocative speech on existential risk at the U.S. embassy in London. Eschewing the paper clip scenario of the p(doom) crowd, Harris asked pointedly whether someone who loses benefits because of a faulty algorithm, or is arrested because of biased facial recognition, does not also experience an existential impact. She pointed as well to how deepfakes and misinformation could be existential threats to public discourse and democratic institutions. Her plea was straightforward: "To address AI safety, we must consider the full spectrum of AI risk." Ahead of the trip, almost 20 members of Congress, led by Rep. Sara Jacobs (D-CA), urged the Vice President to underline the fairness agenda in her U.K. remarks.[5]


The Bletchley Park Declaration.[6] The U.K. government, which once arrested Alan Turing for homosexuality but now celebrates him on the 50-pound note, turned again to its codebreaking history by using the fabled Bletchley Park to host the AI Safety Summit. Prime Minister Sunak had earlier been criticized for his singular focus on AI safety and the interests of the tech industry, but he made progress on both fronts with a broader agenda and greater inclusion than originally anticipated.[7] A series of "fringe" events engaged civil society representatives and academic experts alongside tech leaders and government representatives. Notable ACM members, including Yoshua Bengio, Dame Wendy Hall, and Stuart Russell, were in attendance. Twenty-eight countries and the EU endorsed the closing declaration, which hit all the notes of AI governance—"human-centric, trustworthy, and responsible"—but left open hard questions about next steps. The Declaration acknowledged immediate AI risks in the "domains of daily life," as well as manipulated and deceptive content. The U.K. AI Safety Summit prioritized highly capable general-purpose AI models. Acknowledging the dangers of both intentional misuse and unintended consequences, the signatories highlighted the risks in cybersecurity, biotechnology, and misinformation. There was a strong call for continued international cooperation, as well as a message to those on the front lines who "have a particularly strong responsibility for ensuring the safety of these AI systems." Notably, the signatories of the Bletchley Declaration included China and the Kingdom of Saudi Arabia. Follow-up meetings are already planned for next year in Korea and Paris.


U.S.-China Dialogue. In the lead-up to the U.K. Summit, there was speculation as to whether the British Prime Minister would invite China to join the AI dialogue with "like-minded nations." Sunak made the right call and brought China into the room. U.S. Secretary of Commerce Gina Raimondo and Chinese Vice Minister of Science and Technology Wu Zhaohui shared a stage at the opening plenary of the U.K. AI Safety Summit. There was a shared recognition among the world's AI superpowers[8] of the need to find "global solutions to global problems."[9] Dialogue, as the diplomats often say, is better than the alternative.


G7 Principles on Advanced AI.[10] Perhaps it is not surprising that the international organization that launched the first framework for the governance of AI also announced new principles this week for Advanced AI Systems. The principles emphasize monitoring throughout the AI lifecycle, as well as public reporting, security, authentication, transparency, and risk mitigation. Notable in the G7 statement are clear prohibitions: the G7 said that organizations should not develop or deploy AI systems that "undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights." A related G7 Code of Conduct, based on the Principles, provides a few more details on implementation.[11]


U.N. Makes Progress on Autonomous Weapons.[12] More than 40 years ago, computer scientists confronted a real existential risk of AI—launch-on-warning systems that could propel the world into nuclear war. Last week, the U.N. signaled progress on this concern when a key committee adopted a resolution on lethal autonomous weapons systems. "An algorithm must not be in full control of decisions that involve killing or harming humans," Egypt's representative said after voting in favor of the resolution. "The principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapons system involved," he added.


And now for a quick assessment of The Week That Was.


History Vanishing? Those writing the declarations and executive orders over the past week somehow failed to mention the AI policy frameworks that their countries had previously endorsed. The U.S. and U.K., hosts of the week's big AI events, were among the early supporters of the OECD AI Principles. Almost all the signatories of the Bletchley Declaration also backed the G20 AI Guidelines. And just about every country in the world has endorsed the UNESCO Recommendation on AI Ethics. In the U.S., the failure to carry forward important principles from earlier executive orders, such as "traceable," is also notable. To be sure, new issues have emerged, but unless AI policymakers carry forward earlier efforts, they end up moving sideways instead of ahead. And the shift away from foundational principles for fundamental rights protection in the U.S. is concerning.


The AI Safety Agenda displaces the AI Fairness Agenda. "Safe, Secure, and Trustworthy AI" was the banner of the week, emblazoned across the Executive Order, the G7 Guidelines, and the U.K. AI Safety Summit. But it is also possible to imagine a Presidential Order titled "Fair, Transparent, and Accountable AI." The week's headlines reflected the influence of the tech CEOs and AI experts who had elevated White House concerns about existential risk over the last several months. Many lawmakers and civil society organizations urged clearer support for the AI Bill of Rights that animated the Biden administration in its early days. It is disappointing that the word "fairness" appears nowhere in the 36-page Executive Order.[13] "Traceable," a key goal of the 2020 Presidential Executive Order, is also missing.[14]


Principles Are Important, But. Former OECD Director Andrew Wyckoff observed that "Principles are important, but ensuring implementation and compliance is essential and must be the priority." As Wyckoff noted, the G7 is influential but hardly global. A logical next step would be for the OECD to carry forward the five-year review of the original AI Principles in 2024, in coordination with the G20 under the Brazilian presidency and in cooperation with the U.N., which will soon release recommendations for the global governance of AI.


Congress Needs to Act. Credit those who pulled together the events in the U.S. and the U.K. Rarely, outside of wartime, have governments moved so quickly to address new challenges. But a close examination of the events of the past week leaves many questions unanswered. And in the rush to focus on existential risk, policymakers may have unwittingly undone progress made over the last several years to advance effective frameworks for AI governance.


Marc Rotenberg is founder and executive director of the Center for AI and Digital Policy.


[1] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Oct. 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[2] Federal Register, OMB, Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum, Nov. 3, 2023, https://www.federalregister.gov/documents/2023/11/03/2023-24269/request-for-comments-on-advancing-governance-innovation-and-risk-management-for-agency-use-of

[3] CAIDP, Public Voice, OMB Request for Comment – Memorandum on AI Governance, Innovation, and Management, https://www.caidp.org/public-voice/omb-us-2023/

[4] The White House, Remarks by Vice President Harris on the Future of Artificial Intelligence, London, United Kingdom, Nov. 1, 2023, https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/

[5] Letter from Rep. Sara Jacobs, Sen. Edward Markey, and 17 other Members of Congress to VP Kamala Harris, Oct. 31, 2023, https://sarajacobs.house.gov/uploadedfiles/letter_to_vice_president_harris_regarding_the_uk_ai_safety_summit.pdf

[6] AI Safety Summit, The Bletchley Declaration by Countries Attending the AI Safety Summit, Nov. 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

[7] Merve Hickok and Marc Rotenberg, The UK AI Summit: Time to Elevate Democratic Values, Council on Foreign Relations, Sept. 27, 2023, https://www.cfr.org/blog/uk-ai-summit-time-elevate-democratic-values

[8] Kai-Fu Lee, AI Superpowers: China, Silicon Valley and the New World Order (2018)

[9] Dept. of Commerce, Remarks by Commerce Secretary Gina Raimondo at the AI Safety Summit 2023 in Bletchley, England, Nov. 2, 2023, https://www.commerce.gov/news/speeches/2023/11/remarks-commerce-secretary-gina-raimondo-ai-safety-summit-2023-bletchley

[10] G7 Hiroshima Process, International Guiding Principles for Organizations Developing Advanced AI Systems, Oct. 30, 2023, https://www.mofa.go.jp/files/100573471.pdf

[11] G7 Hiroshima Process, International Code of Conduct for Organizations Developing Advanced AI Systems, Oct. 30, 2023, https://www.mofa.go.jp/files/100573473.pdf

[12] United Nations, First Committee Approves New Resolution on Lethal Autonomous Weapons, as Speaker Warns 'An Algorithm Must Not Be in Full Control of Decisions Involving Killing', Nov. 1, 2023, https://press.un.org/en/2023/gadis3731.doc.htm

[13] CAIDP's report AI and Democratic Values considers the inclusion of such terms as "fairness," "accountability," and "transparency" in national AI strategies to determine a country's alignment with democratic values. Marc Rotenberg, Time to Assess National AI Policies, Blog@CACM, Nov. 20, 2020, https://cacm.acm.org/blogs/blog-cacm/248921-time-to-assess-national-ai-policies/fulltext

[14] The White House, Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Dec. 3, 2020, https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/


