Issue #6 of the AI Safety & Governance Newsletter
U.S. Senate AI roadmap, South Africa's AI hubs, U.S.-China AI collaboration, Colorado's AI law, Sony Music's AI restrictions, AI governance in Africa, implications of AI taking over basic life skills.
Consumers in many African countries are keeping pace with the rest of the world in adopting new AI technologies. Writing for Brookings, Charles Asiegbu and Chinasa T. Okolo argue that existing data protection regulations can be integrated into the AI governance frameworks that African countries seek to develop (link).
In a new report titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate,” the Bipartisan Senate AI Working Group, led by U.S. Senator Chuck Schumer, recommends at least $32 billion per year for AI innovation. The report is the culmination of a series of AI “insight” forums involving AI experts from industry and academia (link).
In addition to funding to enable cross-agency R&D, semiconductor R&D, AI Grand Challenges, and the National AI Research Resource, the report emphasizes the need for workforce upskilling and improved immigration pathways for STEM talent, while ensuring AI systems comply with existing laws, especially in consumer protection and civil rights.
Critics argue that the report prioritizes industry and defense interests over public benefit, civil rights, and corporate accountability, and that it lacks concrete measures to address AI's potential harms, such as bias and discrimination. See the Tech Policy article.
While the report advocates for a risk-based approach to AI, comprehensive data privacy laws, and measures to manage national security threats from AI, including export controls and safeguards against bioweapons, it is not legislation and thus not a concrete regulatory intervention. Personally, I found the written statements from the expert briefings to be the most informative.
Kate Bartlett writes for VOA News about South Africa’s investment in AI expertise through the establishment of sector-focused AI hubs. More broadly, the article highlights the increasing interest of the U.S. and China in investing in AI on the African continent (link).
An emerging theme of U.S.-China collaboration on AI is capacity development for developing countries. Xue Lan, director of Tsinghua University's Institute for AI International Governance, recently said, “Another potential area that the U.S. and China can work together on AI is how to increase, to improve the capacity of developing countries in accessing and using AI technologies.”
The UK AI Safety Institute released a new tool for evaluating large language models. I’ve written about the AI Safety Institute in previous newsletters, and the newly published tool, Inspect, is designed to enhance safety evaluations by providing a standardized, open-source platform for assessing AI models' capabilities and security (link).
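To give a flavor of how Inspect works, here is a minimal sketch of an evaluation task, modeled on the examples in Inspect's public documentation; the sample question, the task name, and the choice of solvers and scorer are my own illustrative assumptions, and the exact API may have evolved since release:

```python
# A toy Inspect evaluation: ask the model a question and check
# whether the target string appears in its answer.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import chain_of_thought, generate

@task
def capital_cities():
    return Task(
        # One hand-written sample; real evaluations typically load
        # datasets from files or external benchmarks.
        dataset=[
            Sample(
                input="What is the capital of France?",
                target="Paris",
            )
        ],
        # A pipeline of solvers applied to each sample before scoring.
        plan=[chain_of_thought(), generate()],
        # Score by checking that the target appears in the output.
        scorer=includes(),
    )
```

If I understand the workflow correctly, such a task is then run from the command line against a specific model, e.g. `inspect eval capital_cities.py --model openai/gpt-4`, with Inspect handling execution, logging, and scoring.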
The Council of Europe has been proactively addressing the challenges posed by advancements in AI, particularly concerning human rights, democracy, and the rule of law. This effort culminated in forming the Committee on Artificial Intelligence (CAI) to develop a Framework Convention to ensure AI activities comply with these fundamental principles.
The resulting Framework Convention is heralded as the “first-ever international legally binding treaty” aimed at ensuring respect for human rights, the rule of law, and democratic standards in the use of AI systems. The treaty was recently adopted in Strasbourg and aligns with existing international and domestic legal obligations in Europe, aiming to mitigate risks such as discrimination, the undermining of democratic processes, and violations of human dignity. See the press release for more details.
Writing in The Economist, Helen Toner and Tasha McCauley, former board members of OpenAI, argue that private companies cannot be expected to align their profit motives with the public good when developing technologies like AI, necessitating effective governmental regulation (link). They explain how OpenAI's experiment in self-governance, despite its innovative structure, ultimately failed to uphold its public-interest mission, highlighting the need for external oversight.
Benjamin Cedric Larsen and Sabrina Küspert for Brookings compare and contrast the EU and U.S. approaches to regulating AI (link). They argue that the EU's approach has been more comprehensive and impactful. For example, the EU AI Act imposes legally binding regulations on providers of general-purpose AI within the EU, whereas the U.S. executive order primarily sets guidelines for federal agencies and imposes limited regulations on private entities. They claim that the voluntary nature of such guidelines reflects the U.S.'s longstanding preference for industry self-governance. However, as global governance efforts come into focus, cooperation between the EU and the U.S., together with other G7 countries, could serve as a model for aligning international efforts.
A newly enacted law in Colorado mandates that developers and deployers of high-risk AI systems use reasonable care to avoid algorithmic discrimination, with a rebuttable presumption of compliance if they meet specific requirements. Developers must disclose information about their AI systems, provide documentation for impact assessments, publicly summarize risks, and inform the attorney general and deployers of any risks within 90 days of discovery (link, link).
In a recent statement, Sony Music prohibited unauthorized text and data mining, web scraping, and similar uses of its content for AI training or commercialization (link).
At the heart of the current wave of innovation in generative AI are massive amounts of data collected from the web. In a recent article for the NY Times, Cade Metz discusses how companies like OpenAI, Google, and Meta have scoured every corner of the internet for reputable English-language text to train their AI models. As these companies face a growing data shortage, they resort to various methods to gather more data, including transcribing YouTube videos and exploring the use of synthetic data. Metz highlights the legal and ethical dilemmas many AI companies encounter as they push the boundaries of data usage and copyright law to stay ahead in the AI race (link).
Similarly, obtaining high-quality training data for AI models in non-English languages is incredibly challenging. Shortly after OpenAI released a new model, researchers discovered that its Chinese token vocabulary was polluted with phrases from spam websites. Zeyi Yang discussed this in more detail in MIT Tech Review.
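To give a sense of how this kind of pollution can be surfaced, here is a minimal sketch that scans a tokenizer's vocabulary for unusually long Chinese tokens, assuming the open-source tiktoken library and its o200k_base encoding (the tokenizer OpenAI published alongside GPT-4o); the length threshold and the CJK check are arbitrary choices for illustration:

```python
# Scan a tokenizer's vocabulary for suspiciously long Chinese tokens.
# Long multi-character tokens suggest the tokenizer's training data
# contained frequently repeated phrases, e.g. from spam websites.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # GPT-4o's published encoding

def is_cjk(ch: str) -> bool:
    """True if the character falls in the main CJK Unified block."""
    return "\u4e00" <= ch <= "\u9fff"

suspicious = []
for token_id in range(enc.n_vocab):
    try:
        raw = enc.decode_single_token_bytes(token_id)
    except KeyError:  # some ids in the range are unused
        continue
    text = raw.decode("utf-8", errors="replace")
    # Flag tokens that are mostly CJK characters and unusually long.
    if len(text) >= 6 and sum(map(is_cjk, text)) > len(text) / 2:
        suspicious.append((token_id, text))

print(f"Found {len(suspicious)} long, mostly-CJK tokens")
for token_id, text in suspicious[:20]:
    print(token_id, repr(text))
```

Skimming the flagged tokens by hand is roughly the kind of inspection that surfaced the spam phrases Yang describes.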
U.S. Secretary of Commerce Gina Raimondo recently announced plans to launch a global scientific network for AI safety, alongside a strategic vision for the U.S. AI Safety Institute (AISI). According to the vision statement,
AISI “will enable more rigorous assessment of AI risk; more effective internal and external safeguards for AI models, systems, and agents; greater public confidence; and ultimately wider and more responsible development and adoption of AI.”
Technology companies do not seem to be shying away from military applications of AI. Following the inaugural “AI Expo for National Competitiveness,” hosted by the think tank created by former Google CEO Eric Schmidt, Caroline Haskins writes for The Guardian about how tech companies are discussing the applications of AI in military settings (link).
The big AI risk, according to Tyler Austin Harper, isn't the sci-fi scenario of robots becoming smarter than us and taking over. It is something closer to home and happening right now: the way we're letting AI take over basic life skills. Harper points out that while AI tools like dating apps, meal-kit services, and personalized recommendations are sold to us as ways to make life easier, they actually make us more dependent on algorithms for things we used to do ourselves, like socializing, thinking, and creating. This, he says, could lead to a future where we struggle to have normal human interactions without relying on AI, which would fundamentally change what it means to be human. Link to the article in The Atlantic.
In an insightful interview, Linda Moore, CEO of TechNet—a tech lobbying organization—highlights the importance of educating both the public and policymakers on AI while ensuring that new regulations do not stifle innovation or drive businesses away (link).
In a powerful personal account, Jatani Hussein, a former content moderator for Facebook and Instagram in Kenya, shares the traumatic experience of reviewing thousands of graphic and violent posts daily. Employed through outsourcing companies, Jatani and his fellow moderators faced severe mental health challenges while scrubbing harmful content off the internet. With Kenyan President William Ruto visiting the White House last week, Jatani urges consideration of the risks faced by content moderators, a role that continues to be in demand as more companies try to ensure their AI models are used safely.
Thanks for reading!