
Biden lays down the law on AI


In a sweeping executive order, US President Joseph R. Biden Jr. on Monday set up a comprehensive series of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order, which spans more than two dozen initiatives, was a long time coming, according to many observers who’ve been watching the AI space, especially with the rise of generative AI (genAI) in the past year.

Along with security and safety measures, Biden’s edict addresses Americans’ privacy and genAI problems revolving around bias and civil rights. GenAI-based automated hiring systems, for example, have been found to have baked-in biases that can give some job applicants advantages based on their race or gender.

Using existing guidance under the Defense Production Act, a Cold War–era law that gives the president significant emergency authority to control domestic industries, the order requires leading genAI developers to share safety test results and other information with the government. The National Institute of Standards and Technology (NIST) is to create standards to ensure AI tools are safe and secure before public release.

“The order underscores a much-needed shift in global attention toward regulating AI, especially after the generative AI boom we have all witnessed this year,” said Adnan Masood, chief AI architect at digital transformation services company UST. “The most salient aspect of this order is its clear acknowledgment that AI isn’t just another technological advancement; it’s a paradigm shift that can redefine societal norms.”

Recognizing the ramifications of unchecked AI is a start, Masood noted, but the details matter more.

“It’s a good first step, but we as AI practitioners are now tasked with the heavy lifting of filling in the intricate details. [It] requires developers to create standards, tools, and tests to help ensure that AI systems are safe and share the results of those tests with the public,” Masood said.

The order calls for the US government to establish an “advanced cybersecurity program” to develop AI tools to find and fix vulnerabilities in critical software. Additionally, the National Security Council must coordinate with the White House chief of staff to ensure the military and intelligence community uses AI safely and ethically in any mission.

And the US Department of Commerce was tasked with developing guidance for content authentication and watermarking to clearly label AI-generated content, a problem that’s quickly growing as genAI tools become proficient at mimicking art and other content. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic — and set an example for the private sector and governments around the world,” the order stated.
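Content authentication of this kind generally rests on cryptographic signing: the publisher attaches a verifiable tag to a piece of content, and anyone holding the right key material can confirm the content hasn’t been altered or forged. As a minimal sketch of that underlying idea (not anything the order or the Commerce Department actually prescribes), here is how a hypothetical agency could tag and verify a notice using a keyed hash in Python:

```python
import hmac
import hashlib

# Hypothetical illustration only: a shared-secret HMAC lets a sender tag a
# message so that recipients holding the same key can detect tampering.
# Real deployments would use managed keys or public-key signatures instead.
SECRET_KEY = b"example-agency-signing-key"  # placeholder value

def sign_message(message: str) -> str:
    """Return a hex digest that authenticates the message."""
    return hmac.new(SECRET_KEY, message.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_message(message: str, signature: str) -> bool:
    """Compare digests in constant time to resist timing attacks."""
    return hmac.compare_digest(sign_message(message), signature)

notice = "Your benefits statement for October is ready."
tag = sign_message(notice)
print(verify_message(notice, tag))        # True: authentic
print(verify_message(notice + "!", tag))  # False: content was altered
```

Production provenance efforts, such as the C2PA standard that several industry watermarking initiatives build on, attach signed metadata to media files rather than bare digests, but the verification principle is the same.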

To date, independent software developers and university computer science departments have led the charge against AI’s intentional or unintentional theft of intellectual property and art. Increasingly, developers have been building tools that can watermark unique content or even poison data ingested by genAI systems, which scour the internet for information on which to train.
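To see how lightweight the simplest of these marking techniques can be, consider a toy least-significant-bit (LSB) watermark in Python. This is illustrative only; real tools, including academic projects such as the University of Chicago’s Glaze and Nightshade, use far more sophisticated and robust methods:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide each watermark bit in the least-significant bit of a pixel value."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the low bit, then set it
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, length: int) -> list[int]:
    """Recover the first `length` hidden bits."""
    return [int(v & 1) for v in pixels.flatten()[:length]]

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in image
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)
assert read_watermark(stamped, len(mark)) == mark  # mark survives; pixels barely change
```

An LSB mark like this is invisible to the eye but destroyed by something as simple as recompression, which is one reason standardized guidance of the kind the Commerce Department is tasked with developing matters more than ad hoc schemes.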

Today, officials from the Group of Seven (G7) major industrial nations also agreed to an 11-point set of AI safety principles and a voluntary code of conduct for AI developers. That agreement is similar to the “voluntary” set of principles the Biden Administration issued earlier this year; the latter was criticized as too vague and generally disappointing.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” Biden’s executive order stated. “The Administration has already consulted widely on AI governance frameworks over the past several months — engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.”

Biden’s order also targets companies developing large language models (LLMs) that could pose a serious risk to national security, economic security, or public health; they will be required to notify the federal government when training the model and must share the results of all safety tests.

Avivah Litan, a vice president and distinguished analyst at Gartner Research, said that while the new rules start off strong, with clarity and safety tests targeted at the largest AI developers, the mandates still fall short. That shortfall reflects the limits of what can be enforced under an executive order and the need for Congress to set laws in place.

She sees the new mandates falling short in several areas:

  • Who sets the definition for ‘most powerful’ AI systems?
  • How does this apply to open-source AI models?
  • How will content authentication standards be enforced across social media platforms and other popular consumer venues?
  • Overall, which sectors/companies are in scope when it comes to complying with these mandates and guidelines?

“Also, it’s not clear to me what the enforcement mechanisms will look like even when they do exist. Which agency will monitor and enforce these actions? What are the penalties for non-compliance?” Litan said.

Masood agreed, saying even though the White House took a “significant stride forward,” the executive order only scratches the surface of an enormous challenge. “By design it implores us to have more questions than answers — what constitutes a safety threat?” Masood said. “Who takes on the mantle of that decision-making? How exactly do we test for potential threats? More critically, how do we quash the hazardous capabilities at their inception?”

One area of critical concern the order attempts to address is the use of AI in bioengineering. The mandate creates standards to help ensure AI is not used to engineer harmful biological organisms or substances, such as deadly viruses or medicines that end up killing people.

“The order will enforce this provision only by using the emerging standards as a baseline for federal funding of life-science projects,” Litan said. “It needs to go further and enforce these standards for private capital or any non-federal government funding bodies and sources (like venture capital). It also needs to go further and explain who and how these standards will be enforced and what the penalties are for non-compliance.”

Ritu Jyoti, a vice president analyst at research firm IDC, said what stood out to her is the clear acknowledgement from Biden “that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks.”

Earlier this year, the EU Parliament approved a draft of the AI Act. The proposed law requires generative AI systems like ChatGPT to comply with transparency requirements by disclosing whether content was AI-generated and to distinguish deep-fake images from real ones.

While the US may have followed Europe in creating rules to govern AI, Jyoti said that doesn’t mean the American government is behind its allies, nor that Europe has done a better job of setting up guardrails. “I think there is an opportunity for countries across the globe to work together on AI governance for social good,” she said.

Litan disagreed, saying the EU’s AI Act is ahead of the president’s executive order because the European rules clarify the scope of companies it applies to, “which it can do as a regulation — i.e., it applies to any AI systems that are placed on the market, put into service or used in the EU,” she said.

Caitlin Fennessy, vice president and chief knowledge officer of the International Association of Privacy Professionals (IAPP), a nonprofit advocacy group, said the White House mandates will set market expectations for responsible AI through the testing and transparency requirements.

Fennessy also applauded US government efforts on digital watermarking for AI-generated content and AI safety standards for government procurement, among many other measures.

“Notably, the President paired the order with a call for Congress to pass bipartisan privacy legislation, highlighting the critical link between privacy and AI governance,” Fennessy said. “Leveraging the Defense Production Act to regulate AI makes clear the significance of the national security risks contemplated and the urgency the Administration feels to act.”  

The White House argued the order will help promote a “fair, open, and competitive AI ecosystem,” ensuring small developers and entrepreneurs get access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.

Immigration and worker visas were also addressed by the White House, which said it will use existing immigration authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the US, “by modernizing and streamlining visa criteria, interviews, and reviews.”

The US government, Fennessy said, is leading by example by rapidly hiring professionals to build and govern AI and providing AI training across government agencies.

“The focus on AI governance professionals and training will ensure AI safety measures are developed with the deep understanding of the technology and use context necessary to enable innovation to continue at pace in a way we can trust,” she said.

Jaysen Gillespie, head of analytics and data science at Poland-based AI-enabled advertising firm RTB House, said Biden is starting from a favorable position because even most AI business leaders agree that some regulation is necessary. Biden is also likely to benefit, Gillespie said, from any cross-pollination from the conversations Senate Majority Leader Chuck Schumer (D-NY) has held, and continues to hold, with key business leaders.

“AI regulation also appears to be one of the few topics where a bipartisan approach could be truly possible,” said Gillespie, whose company uses AI in targeted advertising, including re-targeting and real-time bidding strategies. “Given the context behind his potential Executive Order, the President has a real opportunity to establish leadership — both personal and for the United States — on what may be the most important topic of this century.”

Copyright © 2023 IDG Communications, Inc.



This story originally appeared on Computerworld
