Welcome to the 2nd issue of the Artificial Intelligence (AI) Safety & Governance Newsletter. Thank you for being a subscriber!
AI is not a new technology, but it is increasingly becoming an important part of the way many people around the world live. This newsletter is a curation of content and news pertaining to how technologists, policymakers, and everyone in between are working towards ensuring the impacts of AI are beneficial.
Key Reads
All 193 members of the United Nations General Assembly have unanimously adopted a resolution aimed at governing artificial intelligence (AI), marking a global consensus on the development of safe, secure, and trustworthy AI systems (link). The resolution emphasizes the need for capacity building to close digital divides, discourages the use of AI to undermine peace or repress human rights, and calls for the responsible development and deployment of AI technologies. It highlights the mutual reinforcement of innovation and regulation, and the importance of education, transparency, and knowledge sharing on a voluntary basis.
Although non-binding, the resolution is seen as a step toward using AI to tackle existential threats and seize opportunities across various sectors, including healthcare, education, agriculture, and democracy promotion. It sets the stage for future collaboration within the UN and beyond. Representatives from Japan, Morocco, Singapore, and the UK, in a subsequent media stakeout (link), expressed their support and outlined their views on the resolution's impact.
While there is no consensus on the likelihood and severity of risks from near-future AI systems, the White House's executive order on AI, issued on October 30, requires companies developing “dual-use foundation models” that could pose significant safety and security risks to report related information to the government. In a recent article, Thomas Woodside of the Center for Security and Emerging Technology at Georgetown University argues that the U.S. Congress should prioritize establishing an early warning system for severe risks posed by advanced AI systems as part of its legislative agenda on AI (link). He asserts that legislative action is necessary to solidify and extend these provisions beyond the limitations of an executive order, which relies on the decades-old Defense Production Act and may be rescinded by future administrations.
The cost of developing cutting-edge AI technology is becoming prohibitive for academics, as large-scale compute clusters demand financial resources that typical research labs may not be able to cover with research grants. Consequently, academics are becoming more dependent on major technology companies, which pushes AI researchers to orient their work towards commercial applications. Fei-Fei Li of Stanford advocates for the creation of a national AI repository to democratize access to computing resources and data (link). This initiative aims to counter the dominance of tech giants and preserve the independence of AI research. In contrast, Meredith Whittaker, President of Signal and Chief Advisor to the AI Now Institute, recently stated in a tweet that “...proposals to alleviate this imbalance largely exacerbate it, insofar as they amount to the government paying Big Tech to provide academics access to Big Tech resources” (link).
In a recent panel discussion, experts from the RAND Corporation were unified in recognizing the importance of independent research to understand AI's risks and implications for public policy (link). The discussion drew an analogy between AI and a slow-moving catastrophe akin to climate change, emphasizing incremental harms that could gradually erode quality of life and essential societal structures. It also touched on existential threats, with varying opinions on whether AI poses an unrecoverable harm to humanity. While some panelists viewed AI as a magnifier of existing problems rather than an existential threat, others raised concerns about its potential to disrupt meaningful human activity and connection, positing a more profound existential risk. Policy measures such as transparency, oversight, and diversity in AI development were suggested as ways to mitigate these risks.
A recently introduced bill in the UK’s House of Lords seeks to establish a comprehensive regulatory framework for AI in the UK (link). Among many important stipulations, the bill mandates that individuals involved in AI training must document and report the use of third-party data and intellectual property (IP) to an authority, ensuring all data and IP are utilized with informed consent and in compliance with applicable laws. Copyright and IP issues are increasingly becoming contentious within the AI industry, exemplified by several high-profile lawsuits, such as a recent case involving book authors and the software company Databricks.
A new series of articles published by the AI Now Institute aims to “survey the nationalist narratives around AI and emergent industrial policies being proposed by governments with differing economic and geopolitical motivations.” I’ve only read the first article so far, and I found it deeply insightful.
The first article, written by Susannah Glickman, delves into the deep historical intertwining of the tech industry, government intervention, and the development of AI and semiconductors within the United States. It highlights how government policies and funding have played pivotal roles in the evolution of technology, dispelling the myth of the lone tech entrepreneur and emphasizing the importance of state support and intervention (link).
The narrative begins in the mid-1980s, tracing the emergence of 'tech' as a category and its subsequent growth, which was fueled by government intervention. It showcases how the tech industry has been closely linked with national security, government priorities, and economic policy, illustrating this through various administrations' approaches to industrial policy, such as the CHIPS Act and executive orders focusing on semiconductor manufacturing. The history of AI is deeply intertwined with government funding and its strategic importance to national security. The development of AI is shown to be reliant on advances in semiconductor technology, which, in turn, benefits from massive government support and global cooperation.
Some key highlights include the Cold War's influence on tech development, the strategic importance placed on semiconductors and AI during different administrations, the impact of global competition (notably with Japan), and the shift towards a fabless model in semiconductor manufacturing. The article also discusses the recent re-engagement of government with the semiconductor industry in response to global challenges and supply chain issues, underscoring the ongoing significance of government involvement in sustaining the tech industry.
The narrative culminates in a discussion on the current state and future prospects of AI and semiconductors, emphasizing how intertwined these fields remain with governmental policies and the broader political economy. It asserts that the history of government intervention in tech is not just a series of isolated events but a fundamental aspect of how the tech industry has evolved and operates within the global economy.
The other articles in the series examine the policies and narratives in other countries, including South Africa, the UK, India, and the UAE (link).
Former IBM CEO Ginni Rometty expressed strong support for up-skilling in a recent podcast, discussing how AI can assist in job performance and open opportunities for people with less formal education or credentials as the skills employers need change rapidly. In the same podcast, Dragoș Tudorache, a member of the European Parliament, explores the potential impact of AI on the workforce (link).
A recent investigation by the AI Democracy Projects tested five leading AI text models (OpenAI's GPT-4, Anthropic's Claude, Google's Gemini, Meta's Llama 2, and Mistral's Mixtral) on their ability to provide accurate and reliable election-related information. Despite AI companies' pledges to promote information integrity, the study found that the models often delivered responses that were inaccurate, misleading, incomplete, or harmful. All of the models struggled to correctly answer queries about election rules, such as the prohibition of campaign-related apparel at Texas polling places. GPT-4 performed best in terms of accuracy, but the overall findings highlighted significant gaps in the models' utility for providing public election information (link).
Melissa Heikkilä wrote a comprehensive summary in MIT Technology Review (link) about how the development and use of AI within the EU will be influenced by the recently enacted EU AI Act.
The first global summit on deepfake sexual abuse, organized by My Image My Choice and partners including Bumble, convened experts across fields to discuss the burgeoning issue of deepfake-generated pornography. Deepfakes are 'synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another.' This form of abuse, notably targeting women, has grown dramatically as AI tools have become easily accessible. The experts called for stronger enforcement of terms of service by tech giants and for legal reforms, including adjustments to Section 230 of the US Communications Decency Act, and cautioned that AI watermarking alone would be inadequate (link).
In a recent article, Arvind Narayanan and Sayash Kapoor argue that AI safety largely depends on the context and environment in which the AI is deployed. They contend that focusing solely on making AI models “safe” overlooks the potential for misuse in varying contexts, as models lack the information needed to distinguish harmful from benign uses. They recommend shifting safety efforts away from model alignment alone and advocate for external defenses against misuse. They also suggest a more nuanced assessment of marginal risk (risks that would not otherwise be present), a reorientation of red teaming towards identifying early warnings of misuse, and third-party-led red teaming to ensure unbiased risk assessments (link).
Opportunities
The UN Secretary-General's AI Advisory Body is seeking input on its recently launched Interim Report on AI Governance. The key questions for the Advisory Body's next phase of work focus on, among other things, creating data protection frameworks and common standards for data labeling, promoting shared compute resources, and determining the financing and capacity-building mechanisms necessary for effective international governance. There is an emphasis on enhancing AI talent through international exchanges, joint academic programs, and cross-domain development, alongside leveraging international collaboration to support scientific research. Key challenges include reaching consensus on AI risks, especially concerning autonomous weapons, and establishing enforceable guidelines for ethical boundaries.