OpenAI gives first look at Sora, an AI tool which creates video from just a line of text


OpenAI has shared a first glimpse at a new tool that instantly generates videos from just a line of text.

Dubbed Sora after the Japanese word for “sky”, OpenAI’s tool marks the latest leap forward by the artificial intelligence firm, as Google, Meta and the startup Runway ML work on similar models.

The company behind ChatGPT said that Sora understands how objects “exist in the physical world”, and can “accurately interpret props and generate compelling characters that express vibrant emotions”.

In examples posted on its website, OpenAI showed off a number of videos generated by Sora “without modification”. One clip featured a photorealistic woman walking down a rainy Tokyo street.

The prompts included that she “walks confidently and casually,” that “the street is damp and reflective, creating a mirror effect of the colorful lights,” and that “many pedestrians walk about”.

Another, with the prompt “several giant woolly mammoths approach treading through a snowy meadow”, showed the extinct animals near a mountain range, sending up powdered snow as they walked.

One AI-generated video also showed a Dalmatian walking along window sills in Burano, Italy, while another took the viewer on a “tour of an art gallery with many beautiful works of art in different styles”.

Image: Another video shows a Dalmatian on a window sill in picturesque Burano, Italy. Pic: Sora

Image: A tour of a gallery offers a glimpse of several works of art. Pic: Sora


Copyright and privacy concerns

But OpenAI’s newest tool has been met with scepticism and concern that it could be misused.

Rachel Tobac, a member of the technical advisory council of the US’s Cybersecurity and Infrastructure Security Agency (CISA), posted on X that “we need to discuss the risks” of the AI model.

“My biggest concern is how this content could be used to trick, manipulate, phish, and confuse the general public,” she said.

Lack of transparency

Others also flagged concerns about copyright and privacy, with Ed Newton-Rex, CEO of non-profit AI firm Fairly Trained, adding: “You simply cannot argue that these models don’t or won’t compete with the content they’re trained on, and the human creators behind that content.

“What is the model trained on? Did the training data providers consent to their work being used? The total lack of info from OpenAI on this doesn’t inspire confidence.”


OpenAI said in a blog post that it is engaging with artists, policymakers and others to ensure safety before releasing the new tool to the public.

“We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who will be adversarially testing the model,” the company said.

“We’re also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora.”


OpenAI ‘cannot predict’ Sora use

However, the firm admitted that despite extensive research and testing, “we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it”.

“That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time,” it added.

The New York Times sued OpenAI and its biggest investor, Microsoft, at the end of last year, alleging they unlawfully used the newspaper’s articles to train ChatGPT.

The suit alleges that the AI text model now competes with the newspaper as a source of reliable information and threatens the ability of the organisation to provide such a service.

On Valentine’s Day, OpenAI also said it had terminated the accounts of five state-affiliated groups that were using the company’s large language models to lay the groundwork for hacking campaigns.

The firm said the threat groups – linked to Russia, Iran, North Korea and China – were using its tools for precursor hacking tasks such as open-source queries, translation, searching for errors in code and running basic coding tasks.



This story originally appeared on Sky News
