US President Joe Biden has signed a sweeping executive order that establishes new safety standards for the development and use of artificial intelligence (AI) tools.
After years of non-binding agreements and ethical debates, the Biden administration has taken a concrete step towards increased regulatory oversight of AI technologies.
The US President has announced a new executive order that aims to drive the development of “safe, secure and trustworthy” AI. The order establishes a series of safety assessments that AI tools must undergo, introduces new consumer protections and requires that AI systems respect equity and civil rights.
The new order will require companies developing foundational models that “pose a serious risk to national security, national economic security or national public health and safety” to notify the government of their activities and share the results of all red-team safety tests they conduct.
OpenAI’s GPT and Meta’s Llama 2 would be among the models affected by this requirement, although a senior Biden administration official told reporters in a briefing that the guidelines are expected to apply to the development of future models.
“We’re not going to recall publicly available models that are out there,” the official said. “Existing models are still subject to the anti-discrimination rules already in place.”
In addition, the order tasks the Department of Energy and Department of Homeland Security with creating standards that prevent the use of AI tools for malicious activities, such as engineering biological, radiological, nuclear or cyber attacks.
Meanwhile, the Department of Health and Human Services was asked to evaluate potentially harmful AI-related healthcare practices and the Department of Commerce was tasked with developing guidance for AI watermarking and protecting consumer privacy.
The document also provides guidance to landlords and federal contractors to help avoid bias in AI tools, mentioning specifically the need to establish best practices for the use of the technology in the justice system, including in sentencing, risk assessments and crime forecasting.
With the goal of addressing the impact of AI on the jobs market, the order calls for a report on the technology’s effect on work practices and orders the launch of a National AI Research Resource to provide key information to students and AI researchers, as well as access to technical assistance for small businesses. It also directs the rapid hiring of AI professionals for the government, with new positions already being advertised.
Moreover, the document establishes the administration’s desire to expand grants for AI research in areas such as climate change.
The order was described as “the strongest set of actions any government in the world has ever taken on AI safety, security and trust” by White House deputy chief of staff Bruce Reed, chair of a newly formed White House AI Council.
Navrina Singh, founder of Credo AI and a member of the National Artificial Intelligence Advisory Committee, also celebrated the move. “It’s the right move for right now because we can’t expect policies to be perfect at the onset while legislation is still discussed. I do believe that this really shows AI is a top priority for government.”
The order is expected to be implemented over the next year, according to officials.
The Biden administration has been very vocal about its concerns over the rapid development of generative AI, and even unveiled a new AI Bill of Rights, which outlines five protections internet users should have in the AI age.
Last week, the UK government published a report on the capabilities and risks that AI poses to the UK’s society and economy. The country will be hosting the first global AI Safety Summit on 1-2 November.