ChatGPT-creator OpenAI is readying a team to guard against the risks posed by what the company calls frontier AI models — threats ranging up to the possibility of such models helping start a nuclear war.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” the company said in a blog post.
The team, dubbed Preparedness and led by Aleksander Madry, will work to minimize the risks posed by these continuously evolving AI models, the company added.
These “catastrophic” risks include individualized persuasion, cybersecurity, chemical, biological, radiological, and nuclear threats, and autonomous replication and adaptation.
The new team, according to the company, is expected to tie together capability assessment, evaluations, and internal red teaming for frontier models — from the models it develops in the near future to those with Artificial General Intelligence-level capabilities.
The Preparedness team will also be responsible for developing a risk-informed development policy (RDP).
“Our RDP will detail our approach to developing rigorous frontier model capability evaluations and monitoring, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across that development process,” the company said.
The RDP is also meant to complement and extend the company’s existing risk mitigation work, it added.
OpenAI is also looking to hire “exceptional talent from diverse technical backgrounds” for the new team.
Further, the company said that it was launching an AI Preparedness Challenge for catastrophic misuse prevention.
“We will offer $25,000 in API credits to up to 10 top submissions, publish novel ideas and entries, and look for candidates for Preparedness from among the top contenders in this challenge,” the company said.
In July, OpenAI opened a new alignment research division, focused on developing training techniques to stop superintelligent AI — artificial intelligence that could outthink humans and become misaligned with human ethics — from causing severe harm.
Around the same time, the company joined other leading AI labs to make voluntary commitments to promote safety, security, and trust in AI.
Prior to that, in May, hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that the evolution of artificial intelligence (AI) could lead to an extinction event, and arguing that controlling the technology should be a top global priority.
Copyright © 2023 IDG Communications, Inc.
This story originally appeared on Computerworld