
Global AI Safety Summit shows need for collaborative approach to risks


After months of buildup, the world’s first AI Safety Summit came to a close yesterday after two days of discussions hosted by the UK and attended by representatives from leading AI companies, governments, and other industry stakeholders.

One result to emerge from the summit was the signing of the so-called Bletchley Declaration, under which 28 governments, including China, the US, and the EU, agreed to work together on AI safety. It was a positive outcome because it shows a global understanding that individual countries cannot deal with the threat of AI in isolation, said University of Warwick Assistant Professor Shweta Singh, whose research includes ethical and responsible AI.

“To fight the risk from AI, it can only happen through collaboration, and not just collaboration between one or two countries, it has to be an international effort,” she said. “[The Declaration] is the first acknowledgement that this is the only way to actually fight the risks of AI and therefore mitigate those risks moving forward.”

However, the only concrete commitment the declaration contains is a promise to keep talking, rather than any agreement on overarching regulation, the issue on which the divisions between nations appear starkest.

The UK government is continuing to take a “wait and see” approach to regulation, arguing that at the current pace of development, any legislation would likely be ineffective almost as soon as it was passed into law. Furthermore, much of the UK’s pre-summit messaging focused on the more headline-grabbing, existential threats, including AI’s possible ability to develop biological and chemical weapons, threats that even government officials admitted were worst-case or highly unlikely scenarios.

By contrast, the executive order on safe, secure, and trustworthy AI, signed by US President Joe Biden ahead of the summit on Monday, seeks to tackle the immediate risks presented by AI, such as bias, discrimination, and misinformation.

Addressing these issues at the US Embassy in London, Vice President Kamala Harris said that while existential threats such as AI-enabled cyberattacks and AI-formulated bioweapons are profound and demand global action, other harms are occurring right now and are already being seen by some as existential.

“When people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation… is that not existential for democracy?” Harris said. “To define AI safety, I offer that we must consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations.”

Singh said that while she understands the wait-and-see argument put forward by the UK government, that doesn’t mean the country should simply sit back and let AI continue to develop without any guardrails in place.

She also believes that UK Prime Minister Rishi Sunak fails to grasp what Harris and Biden clearly have: that the threats from bias, discrimination, and disinformation are not coming down the road but are already affecting people’s lives.

“The risk which we have now, I don’t see that actually being talked about [by the UK government],” Singh said. “[The government] is looking at it as if this is something which is going to affect us, saying ‘we’ll need to tame the beast’ but the point is, the beast is already in the room.”

Industry representatives dominated the event

While there were around 100 attendees at the summit, concerns were raised about the overrepresentation of some groups. One-third of the guests were from the private sector, and the attendee list skewed heavily Western, with 60% of those at Bletchley Park coming from the UK or US. Civil society participation was minimal, and no human rights or media watchdog organizations were present.

Furthermore, at the session focused on the risks of integrating frontier AI into society, including how AI could disrupt jobs and industries, not a single workers’ rights representative was in attendance.

“Big tech dominated the room — Elon Musk, for example, was a major distraction, and the very few media there weren’t even able to ask questions,” said Michael Bak, executive director of the Forum on Information and Democracy. “We cannot allow those who make, market and exploit AI for private gain to wield more influence than other critical civil society stakeholders.”

Bak also said that both the newly announced UK-based global hub, charged with testing the safety of emerging AI applications, and the attendance in the room notably lacked meaningful input from Southern Hemisphere countries, something that should not have been allowed to happen given that AI will affect all democracies and humanity as a whole.

“Fifty-one democracies already support the International Partnership and Forum for Information and Democracy, an innovative international framework that ensures technology lives in the house of democracy and not the other way around,” said Bak. “Such inclusive frameworks are stronger and more credible, and thus more effective in safeguarding our democracies and meeting the needs and aspirations of people around the world.”

What’s next for global cooperation on AI?

One tangible outcome from this week’s summit was the commitment from South Korea and France to each host their own international AI Safety Summit in 2024. Furthermore, the UK and US governments have both committed to launching their own AI Safety Institutes, focused on advancing AI safety in the public interest, a move Singh believes more countries will replicate.

While regulation might still be a long way off, Singh said there are things governments can do in the short term to combat current harms.

“[These harms are] happening right now that we need to tackle but that doesn’t always have to be done through regulation,” she said. “For example, watermarking technology can be used to combat deepfakes and help stop the spread of misinformation and that’s something that doesn’t require any government to pass a law.”
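The watermarking idea Singh mentions can be illustrated with a toy example. The sketch below uses least-significant-bit embedding, one of the simplest (and least robust) watermarking techniques; real provenance systems rely on far more sophisticated perceptual watermarks or signed metadata. The function names and the approach here are illustrative assumptions, not any specific product.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least significant bits of the pixel bytes.

    Changing only the lowest bit of each byte leaves the image
    visually unchanged while carrying 1 bit of payload per byte.
    """
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)


def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes hidden by embed_watermark."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)
```

A scheme this simple is trivially stripped by re-encoding the image, which is why the detection-versus-removal arms race Singh alludes to matters: practical watermarks must survive compression, cropping, and deliberate tampering.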

Ultimately, the biggest concrete outcome of the week was the unveiling of Biden’s executive order on AI, which, although not directly connected to the summit, Singh argues was likely timed to coincide with the event.

The issues outlined by the Biden Administration in the document are ideals that Singh believes all governments could and should get behind, providing a truly universal approach to tackling AI harms.

“As we go forward, we will hopefully see each nation adopting these pillars, or at least something that is similar,” she said.

Copyright © 2023 IDG Communications, Inc.



This story originally appeared on Computerworld
