Meta and Google are among a group of seven Big Tech firms that have pledged to responsibly develop artificial intelligence amid mounting scrutiny of its potential risks to society.
The companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – unveiled a set of voluntary commitments at a meeting with President Biden on Friday.
The pledges surfaced as the companies are locked in fierce competition to dominate the AI field, even as experts, including Elon Musk and OpenAI’s own CEO Sam Altman, warn that unchecked development could pose risks ranging from sweeping job losses to the spread of misinformation to the possible extinction of humanity.
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI,” the White House said in a fact sheet on the talks.
The broad agreement includes pledges to conduct extensive testing to ensure AI products are safe before releasing them to the public, put cybersecurity safeguards in place, introduce a “watermarking” system to identify AI-generated content, and conduct more research on potential risk factors such as user privacy and harmful bias in AI systems.
The commitments come amid mounting federal scrutiny of AI technology in the wake of ChatGPT’s surge in popularity over the last several months.
However, since lawmakers have yet to pass any formal legislation on the subject, the companies won’t face any consequences for violating the pledges as they compete to dominate the burgeoning field.
“We’re going to hold them accountable for their execution,” White House chief of staff Jeff Zients told the Wall Street Journal. “Companies can and will need to do more than they’re doing now and so will the federal government.”
As The Post reported, the rise of AI-generated content could be a major problem ahead of the 2024 presidential election, with experts warning tech platforms are currently ill-equipped to deal with the influx.
In May, Altman and the “Godfather of AI” Geoffrey Hinton were among a group of AI leaders who signed a statement that said, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Altman also testified on Capitol Hill that the AI industry would benefit from government regulation – and admitted his worst fear was that advanced AI technology could “cause significant harm to the world” without proper guardrails.
Critics, meanwhile, argue that Altman and other AI doomsayers are incentivized to push for regulation because it would raise the barrier to entry for potential rivals and make it harder for them to compete with deep-pocketed industry leaders.
This story originally appeared on NYPost.