After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.
The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.
On Tuesday night, the White House issued invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event Monday hosted by President Joseph R. Biden Jr., according to The Washington Post.
Generative AI, which has been advancing at breakneck speeds and setting off alarm bells among industry experts, spurred Biden to issue “guidance” last May. Vice President Kamala Harris also met with the CEOs of Google, Microsoft, and OpenAI — the creator of the popular ChatGPT chatbot — to discuss potential issues with genAI, which include security, privacy, and control problems.
Even before the launch of ChatGPT in 2022, the administration had unveiled a blueprint for a so-called “AI Bill of Rights” as well as an AI Risk Management Framework; it also pushed a roadmap for standing up a National AI Research Resource.
The new executive order is expected to elevate national cybersecurity defenses by requiring large language models (LLMs) — the foundation of generative AI — to undergo assessments before they can be used by US government agencies. Those agencies include the US Defense Department, Energy Department and intelligence agencies, according to the Post.
The new rules will bolster what had been a voluntary commitment by 15 AI development companies to ensure genAI systems are evaluated in a manner consistent with responsible use.
“I’m afraid we don’t have a very good track record there; I mean, see Facebook for details,” Tom Siebel, CEO of enterprise AI application vendor C3 AI and founder of Siebel Systems, told an audience at MIT’s EmTech Conference last May. “I’d like to believe self-regulation would work, but power corrupts, and absolute power corrupts absolutely,” he said.
While genAI offers extensive benefits with its ability to automate tasks and create sophisticated text responses, images, video and even software code, the technology also has been known to go rogue and generate false output, a phenomenon known as hallucination.
“Hallucinations happen because LLMs, in their most vanilla form, don’t have an internal state representation of the world,” said Jonathan Siddharth, CEO of Turing, a Palo Alto, Calif.-based company that uses AI to find, hire, and onboard software engineers remotely. “There’s no concept of fact. They’re predicting the next word based on what they’ve seen so far — it’s a statistical estimate.”
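The “statistical estimate” Siddharth describes can be illustrated with a toy bigram model — a deliberately simplified stand-in for an LLM, not code from Turing or any vendor. It picks the next word purely from co-occurrence counts, with no notion of truth, which is why fluent-but-wrong output is possible:

```python
from collections import Counter, defaultdict

# Illustrative toy corpus (hypothetical); a real LLM trains on billions of words.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("is"))  # ('paris', 0.666...): a probability, not a fact
```

The model answers “paris” only because that word followed “is” most often in its training text — if the corpus were skewed or wrong, the prediction would be confidently wrong in exactly the same way.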
GenAI can also unexpectedly expose sensitive or personally identifiable data. At its most basic level, the tools gather and analyze massive quantities of data from the internet, corporations, and even government sources in order to deliver more accurate and detailed content to users. The drawback is that the information gathered by AI isn’t necessarily stored securely; AI applications and networks can leave that sensitive information vulnerable to exploitation by third parties.
Smartphones and self-driving cars, for example, track users’ locations and driving habits. While that tracking software is meant to help the technology better understand habits to more efficiently serve users, it also gathers personal information as part of big data sets used for training AI models.
GenAI is also vulnerable to baked-in biases, such as AI-assisted hiring applications that tend to favor men over women, or white candidates over minorities. And as genAI tools get better at mimicking natural language, images, and video, it may soon become impossible to discern fake results from real ones; that’s prompting companies to set up “guardrails” against the worst outcomes, whether accidental or the intentional work of bad actors.
US efforts to rein in AI followed similar moves by European countries to ensure the technology isn’t generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust. Italy temporarily banned ChatGPT outright over privacy concerns after the natural language processing app experienced a data breach involving user conversations and payment information.
The European Union’s “Artificial Intelligence Act” (AI Act) was the first of its kind by a western set of nations. The proposed legislation relies heavily on existing rules, such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. The AI Act was originally proposed by the European Commission in April 2021.
States and municipalities are eyeing restrictions of their own on the use of AI-based bots to find, screen, interview, and hire job candidates because of privacy and bias issues. Some states have already put laws on the books.
The White House is also expected to lean on the National Institute of Standards and Technology to tighten industry guidelines on testing and evaluating AI systems — provisions that would build on the voluntary commitments on safety, security and trust that the Biden administration extracted from 15 major tech companies this year on AI.
Copyright © 2023 IDG Communications, Inc.
This story originally appeared on Computerworld