How do you regulate something that can help but also harm people, something that touches every sector of the economy and changes so quickly that even experts can't keep up? This has been the central challenge for governments when it comes to artificial intelligence. If they take too long to regulate AI, they may miss the window to prevent the technology's most dangerous uses and abuses. If they move too quickly, they risk drafting poorly designed rules, stifling innovation, or ending up in the same position as the European Union. The bloc unveiled its first draft of the AI Act in 2021, just before a wave of new generative AI tools arrived and made much of that legislation obsolete (the proposal, which has still not been approved, was later revised to cover the new technology, but the fit remains somewhat awkward).
On Monday, the White House announced its own plan to govern the fast-moving world of AI: a sweeping executive order that imposes new rules on companies and directs various federal agencies to begin putting safeguards around the technology. The Biden administration, like others, has been under heavy pressure to act on AI since late last year, when ChatGPT and other generative AI applications burst into public consciousness. AI companies have sent their executives to testify before Congress and brief lawmakers on the technology's promise and pitfalls, while activist groups have urged the federal government to take strong action to prevent dangerous uses of AI, such as the creation of new cyberweapons and the generation of deceptive deepfake images.
In addition, a cultural battle has erupted in Silicon Valley: some researchers and experts urge the AI industry to slow down, while others want it to accelerate at full speed. President Joe Biden's executive order seeks a middle ground: it allows AI development to continue largely undisturbed while imposing some modest rules, and it signals that the federal government intends to keep a close eye on the AI industry in the years ahead. In contrast to social media, a technology that was allowed to grow unchecked for more than a decade before regulators took any interest in it, this order shows that the Biden administration has no intention of letting artificial intelligence slip by unregulated.
The executive order, which runs to more than 100 pages, seems to have something for everyone. Those most worried about AI safety, such as the signers of an open letter this year declaring that AI poses a "risk of extinction" comparable to pandemics and nuclear weapons, will be pleased that the order imposes new requirements on companies building powerful AI systems. In particular, companies that build the largest AI systems will have to notify the government and share the results of their safety tests before releasing their models to the public. These reporting requirements will apply to models trained above a specific threshold of computing power (more than 10²⁶ integer or floating-point operations, in case you were curious), which will likely include the next generation of models developed by OpenAI, Google, and other major AI companies. The requirements will be enforced under the Defense Production Act, a 1950 law that gives the president broad authority to compel U.S. companies to support efforts deemed important to national security. That could give these rules a degree of force that the government's earlier voluntary AI commitments lacked.
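To get a feel for the scale of that threshold, here is a minimal back-of-the-envelope sketch in Python. It uses the common "6 × parameters × training tokens" approximation for total training compute; both the rule of thumb and the model sizes below are illustrative assumptions, not figures from the order or from any company.

```python
# Illustrative only: rough estimate of training compute using the common
# heuristic C ~ 6 * N * D (parameters times training tokens), compared against
# the 1e26-operation reporting threshold cited in the executive order.
# The model figures below are hypothetical, not official numbers.

REPORTING_THRESHOLD_OPS = 1e26  # threshold mentioned in the order


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training operations with the 6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens


# A hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
ops = estimated_training_ops(parameters=1e12, training_tokens=20e12)
print(f"Estimated training compute: {ops:.2e} operations")
if ops > REPORTING_THRESHOLD_OPS:
    print("This run would likely fall under the reporting requirement.")
else:
    print("This run would fall below the reporting threshold.")
```

Under these assumed numbers the estimate comes out to about 1.2 × 10²⁶ operations, just over the line, which is why the requirement is expected to catch only the very largest frontier models.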
In addition, the order requires cloud providers that rent computing power to AI developers (a list that includes Microsoft, Google, and Amazon) to give the government information about their foreign customers. It also instructs the National Institute of Standards and Technology to develop standardized tests to measure the performance and safety of AI models. The executive order contains provisions as well for the AI ethics camp, the activists and researchers who focus on the near-term harms of AI, such as bias and discrimination, and who consider fears of AI-driven extinction overblown. In particular, the order directs federal agencies to take steps to prevent AI algorithms from worsening discrimination in housing programs, federal benefits, and the criminal justice system. It also tasks the Department of Commerce with developing guidance for watermarking AI-generated content, which could help curb the spread of AI-generated misinformation.
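The Commerce guidance does not yet exist, so any concrete scheme is speculation. As a toy illustration of the underlying idea, the sketch below attaches a verifiable provenance tag to generated content using an HMAC; real watermarking embeds signals in the content itself and is considerably more involved, and every name and key here is hypothetical.

```python
# Toy sketch of content provenance tagging, not an official watermarking scheme:
# the generator signs content with a secret key, and a verifier holding the same
# key can later check whether a piece of content carries a valid tag.
import hashlib
import hmac

SECRET_KEY = b"provider-held signing key"  # placeholder key for the example


def tag_content(content: bytes) -> str:
    """Return a provenance tag to ship alongside the generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content matches its provenance tag."""
    return hmac.compare_digest(tag_content(content), tag)


generated = b"an AI-generated paragraph or the bytes of a generated image"
tag = tag_content(generated)
print(verify_content(generated, tag))          # True: content is unaltered
print(verify_content(b"edited content", tag))  # False: tag no longer matches
```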
But what do the AI companies these rules are aimed at think of it? Several executives I spoke to on Monday seemed relieved that the White House order does not require them to register for and obtain a license to train large AI models, a proposal some in the industry had criticized as draconian. It does not require them to pull any of their current products from the market, nor does it force them to disclose information they have tried to keep private, such as the size of their models and the methods used to train them. Nor does it attempt to restrict the use of copyrighted data to train AI models, a common practice that artists and other creative workers have attacked in recent months and that is now being litigated in the courts.
Technology companies will also benefit from the order's efforts to relax immigration restrictions and streamline the visa process for workers with specialized AI skills, part of a national push to attract "AI talent". Of course, not everyone will be pleased. The most ardent safety activists may have wanted the White House to set stricter limits on the use of large AI models, or to block the development of open-source models, whose code anyone can download and use freely. And some AI enthusiasts may bristle at the government doing anything at all to limit the development of a technology they consider, on balance, a force for good.
But the executive order seems to strike a sensible balance between pragmatism and caution, and, given that Congress has not passed comprehensive AI regulation, it appears to be the clearest set of safeguards we will have for the foreseeable future. There will be other attempts to regulate AI, especially in the European Union, where the AI Act could be approved as early as next year, and in the United Kingdom, which hosted a summit of world leaders this week that was expected to produce new initiatives for governing AI development. The White House's executive order is a sign that the administration intends to move quickly. The question, as always, is whether AI itself will move faster.