If you thought yesterday cleared the deck on AI and politics, well, it turns out we were both wrong.
The UK and US are set to announce a partnership on AI safety, exchanging analysis through newly created AI safety institutes. The collaboration combines the White House’s new guardrails on AI development with the existing work of the UK’s Frontier AI Taskforce. The UK will establish an AI Safety Institute to examine and test new types of AI technology, and the institutes will develop guidelines, standards, and best practices for evaluating and mitigating AI risks. Both countries will also share information and collaborate on research, with the US extending information sharing to similar safety institutes in other countries. The partnership aims to address everything from catastrophic risks to societal harms such as bias and misinformation.
Vice President Kamala Harris announced the establishment of the United States AI Safety Institute to protect American consumers from potential harm caused by AI. The institute will create guidelines, benchmark tests, and best practices for testing and evaluating potentially dangerous AI systems. The Biden administration is also addressing responsible AI adoption in military and international contexts.
Governments from six continents, including the US, China, and the European Union, have reached an agreement known as the Bletchley Declaration to limit the risks and harness the benefits of AI. The declaration calls for cross-border policies to prevent those risks and supports internationally inclusive research on advanced AI models. It also emphasizes international cooperation and working through existing organizations to ensure responsible and trustworthy AI. The agreement was reached at the AI Safety Summit held at Bletchley Park, with future summits planned for South Korea and France.
Why do we care?
I covered a lot of the why yesterday, so I'll just observe the massive rollout of government involvement in only a few short days, and the EU isn't even done with theirs.
The multi-nation agreements suggest that universal standards for AI safety are on the horizon. MSPs should prepare for these guidelines to affect any services they offer that include AI components. Given the rapid pace of government involvement, MSPs should have a strategy for continuously updating their compliance mechanisms and service offerings.