Issue #3 of the AI Safety & Governance Newsletter
Policy frameworks for governing AI in Africa, AI licensing and accountability structures, requirements for U.S. agencies to prevent discriminatory use of AI, and a Canadian AI Safety Institute.
Welcome to the 3rd issue of the Artificial Intelligence (AI) Safety & Governance Newsletter. Thank you for being a subscriber!
Key Reads
Nigeria’s Ministry of Communications, Innovation and Digital Economy is bringing together 120 AI researchers to co-create a national AI strategy (link). Mauritius became the first country in Africa to publish a national AI strategy in 2018, and Rwanda and Egypt have followed suit. Several other countries, including Ghana and Tunisia, are at various stages of the process.
Microsoft recently published a report on “policy frameworks” for governing AI in Africa, which highlights several of these efforts and proposes a legal and regulatory blueprint for AI governance on the continent. Key themes include: the need to define a class of high-risk AI systems based on their capability, autonomy, and potential for large-scale harm; a role for governments in mandating that such systems include mechanisms to detect, avoid, and disengage from unintended consequences; and a requirement that such systems be deployed only on licensed AI infrastructure, providing an additional layer of human oversight (link).
Most generative AI models are trained on text, image, and video data aggregated from the web. Christo Buschek and Jer Thorp investigate the curation and labeling process of LAION-5B, one of the largest and most commonly used datasets, and reveal several critical issues. The presence of illegal material in LAION-5B highlights the legal and ethical challenges in managing large datasets. Due to its sheer size, human curation is impractical, leading to reliance on automated processes that often introduce biases. These biases can be amplified when algorithmic data filters are used in place of human review, as is commonly the case (link).
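To make the filtering concern concrete, here is a minimal sketch of threshold-based, model-driven dataset curation. It is illustrative only, not the actual LAION-5B pipeline; `Sample`, `filter_dataset`, and `similarity_score` are hypothetical names, with `similarity_score` standing in for a learned image–text scoring model.

```python
# Illustrative sketch of automated dataset filtering (not the LAION-5B pipeline).
# `similarity_score` is a hypothetical placeholder for a learned image-text scorer.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Sample:
    image_url: str
    alt_text: str


def filter_dataset(
    samples: Iterable[Sample],
    similarity_score: Callable[[Sample], float],
    threshold: float = 0.28,
) -> List[Sample]:
    """Keep only samples the scoring model rates above a fixed threshold.

    Any systematic bias in the scoring model (e.g., lower scores for
    captions in under-represented languages) is silently baked into the
    resulting dataset, because no human reviews what gets discarded.
    """
    return [s for s in samples if similarity_score(s) >= threshold]
```

The design point is that a single scalar threshold decides inclusion at web scale, so the filter model’s blind spots become the dataset’s blind spots.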
The European Parliament released a briefing titled “What Think Tanks are Thinking” that references several position articles on AI and AI governance from a broad range of think tanks (link). Here are two I found interesting:
Licensing AI is not the answer—but it contains the answers (Tom Wheeler, Brookings Institution): Licensing AI efforts above a certain scale of capability may address some concerns, but it is insufficient for effective oversight. Instead, Wheeler argues, establishing standards and a new federal agency to oversee and enforce them is essential for governing AI in the public interest, ensuring safety, and fostering innovation and competition across all scales of AI development (link).
The AI Election Year: How to Counter the Impact of Artificial Intelligence (Katja Muñoz, German Council on Foreign Relations): The strategic manipulation of information through generative AI poses a significant threat to democracy, particularly in an “AI election year” like 2024, with over 70 elections worldwide. This necessitates an assertive defense strategy and collective effort from tech companies (e.g., via content watermarking), social media platforms (e.g., via better identity verification), Germany, the EU, and NATO to implement solutions, legislative measures, and tactics to counter these threats (link).
The U.S. government is implementing new, binding requirements for U.S. agencies to prevent discriminatory use of AI, including measures such as allowing travelers to refuse facial recognition scans at airports, establishing transparency and oversight for AI tools used by the government, and aiming to influence the AI industry through federal procurement policies (link).
The U.S. National Telecommunications and Information Administration (NTIA) published a report and recommendations on AI Accountability Policy (link). The report addresses several topics regarding the safety and governance of AI. I’ll highlight a key section on AI evaluation, edited for clarity:
The use of audits within AI accountability structures should depend on the risk level, deployment sector, maturity of evaluation methodologies, and available resources. Audits are appropriate for high-risk applications or models, at least for validating claims about system performance, limitations, and governance controls. Audits should employ replicable, standardized, and transparent methods. We recommend that audits be required for high-risk AI systems and applications, with government support for an independent evaluation ecosystem. Audits should incorporate applicable standards recognized by federal agencies. Designating AI as high-risk outside of specific contexts is challenging; however, the Office of Management and Budget has provided guidance on rights-impacting and safety-impacting AI systems, with exemptions as necessary.
Utah Senate Bill 149, known as the Artificial Intelligence Policy Act, has been signed into law. The new law introduces regulations on the use of generative AI in commercial communications and healthcare, requiring clear disclosures about AI use, establishing fines for violations, and creating a state Office of Artificial Intelligence Policy to oversee AI practices and innovation (link).
In The Economist, Madeleine I.G. Daepp and Robert Osazuwa Ness discuss how generative AI is being used to spread targeted disinformation in Taiwan, highlighting how fact-checkers and government agencies are struggling to keep up, and the need to develop new strategies for monitoring and countering disinformation at scale (link).
Most AI researchers and companies agree that the government should not impose limitations on open-source AI, such as blocking access to the weights of large-scale models. The NTIA received over 300 comments from the public on this question, which will help inform a report to the President containing the NTIA’s findings and policy recommendations (link). Meta, OpenAI, and a group of researchers affiliated with the Institute for Advanced Study in Princeton have published their comments.
The Emerging Technology Observatory, a project of the Center for Security and Emerging Technology at Georgetown University, published an analysis showing that AI safety research constitutes 2% of all AI research. Despite its small share, AI safety research is highly cited, indicating its influence within the broader AI research community (link).
The U.S. and UK recently signed a Memorandum of Understanding to collaborate closely on AI safety research and testing, leveraging shared scientific approaches and expert exchanges to enhance global AI safety standards and address emerging risks (link).
Achieving global governance of AI is complicated by differences in countries' approaches to development, safety, and application, including in military contexts. Despite these challenges, Xue Lan, a Chinese public policy expert and Professor and Dean of Schwarzman College at Tsinghua University, advocates for inclusive dialogue, technology sharing to bridge the intelligence gap, and the formation of globally inclusive governance mechanisms to ensure the ethical and safe advancement of AI technology (link; article in Chinese).
Content moderation involves screening and monitoring user-generated content on online platforms to ensure it adheres to the platform's rules, guidelines, or legal requirements. Alyssa Boicel discusses in Tech Policy Press the challenges and limitations of using large language models (LLMs) for content moderation, particularly their inability to fully grasp cultural nuances, evolving language, and the reappropriation of derogatory terms (link).
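For readers unfamiliar with how such systems are typically wired up, here is a minimal, hypothetical sketch of prompt-based LLM moderation; `call_llm` is a stand-in for any text-generation API, and the prompt and policy are invented for illustration.

```python
# Minimal sketch of prompt-based LLM content moderation (illustrative only).
# `call_llm` is a hypothetical placeholder for any text-generation API.

from typing import Callable

MODERATION_PROMPT = (
    "You are a content moderator. Classify the post below as ALLOW or REMOVE "
    "under this policy: no slurs or targeted harassment.\n"
    "Post: {post}\n"
    "Answer with ALLOW or REMOVE only."
)


def moderate(post: str, call_llm: Callable[[str], str]) -> str:
    """Return 'ALLOW' or 'REMOVE' for a single post.

    The limitation discussed above lives in this design: a static policy and
    a single-shot classification cannot distinguish a reclaimed or in-group
    use of a derogatory term from an attack, and the model's training data
    may lag behind evolving slang, so borderline posts are easily misjudged.
    """
    answer = call_llm(MODERATION_PROMPT.format(post=post)).strip().upper()
    return "REMOVE" if answer.startswith("REMOVE") else "ALLOW"
```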
U.S. Congress members, led by Anna G. Eshoo, Co-Chair of the House AI Caucus, and Congressman Neal Dunn, have introduced the Protecting Consumers from Deceptive AI Act to combat the challenges posed by AI-generated deepfakes and other deceptive content. The legislation aims to develop standards for identifying and labeling AI-generated content and to require disclosures from AI developers and online platforms (link).
The Canadian government plans to allocate $2.4 billion to enhance Canada's AI capabilities and maintain its global competitive edge. Key initiatives include investing in computing capabilities and technological infrastructure, supporting AI startups, aiding small and medium-sized businesses in integrating AI, and funding training for workers in industries that could be disrupted by AI. Additionally, Canada plans to establish a Canadian AI Safety Institute (link).
The use of automated systems, including AI, by private and public entities can result in unlawful discrimination and violate federal laws, given issues like biased datasets, lack of transparency, and incorrect assumptions in their development. In a recent joint statement, U.S. federal agencies including the CFPB, DOJ, EEOC, and FTC assert their commitment to enforcing civil rights, non-discrimination, and consumer protection laws, emphasizing that these laws apply to automated systems just as they do to traditional practices (link).
Opportunities
USAID is requesting comments as it seeks to develop a global AI research agenda. The key questions focus on identifying effective practices, guidelines, and frameworks to ensure that AI research is conducted safely, ethically, and with consideration for global contexts (link).