
Google-backed Anthropic debuts Claude 3, its most powerful chatbot yet


Anthropic on Monday debuted Claude 3, a suite of artificial intelligence models that it says are its fastest and most powerful yet. The new tools are called Claude 3 Opus, Sonnet and Haiku.

The company said the most capable of the new models, Claude 3 Opus, outperformed OpenAI’s GPT-4 and Google’s Gemini Ultra on industry benchmark tests, such as undergraduate-level knowledge, graduate-level reasoning and basic mathematics.

This is the first time Anthropic has offered multimodal support. Users can upload photos, charts, documents and other types of unstructured data for analysis and answers.
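For developers, that multimodal support is exposed through Anthropic’s Messages API, which accepts image content blocks alongside text. The following is a minimal sketch using the Python SDK; the file name, prompt and exact model ID are illustrative assumptions rather than details from the article.

```python
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "chart.png" is a hypothetical local file standing in for an uploaded chart.
with open("chart.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-opus-20240229",  # Opus model ID at launch (assumed here)
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data,
                    },
                },
                {"type": "text", "text": "Summarize the trends shown in this chart."},
            ],
        }
    ],
)

print(message.content[0].text)
```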

The other models, Sonnet and Haiku, are more compact and less expensive than Opus. Sonnet and Opus are available in 159 countries starting Monday, while Haiku will be coming soon, according to Anthropic. The company declined to specify how long it took to train Claude 3 or how much it cost, but it said companies like Airtable and Asana helped A/B test the models.

This time last year, Anthropic was seen as a promising generative AI startup founded by ex-OpenAI research executives. It had completed Series A and B funding rounds, but it had only rolled out the first version of its chatbot without any consumer access or major fanfare.

Twelve months later, it’s one of the hottest AI startups, with backers including Google, Salesforce and Amazon, and a product that directly competes with ChatGPT in both the enterprise and consumer worlds. Over the past year, the startup closed five different funding deals, totaling about $7.3 billion.

The generative AI field has exploded over the past year, with a record $29.1 billion invested across nearly 700 deals in 2023, a more than 260% increase in deal value from a year earlier, according to PitchBook. It’s become the buzziest phrase on corporate earnings calls quarter after quarter. Academics and ethicists have voiced significant concerns about the technology’s tendency to propagate bias, but even so, it’s quickly made its way into schools, online travel, the medical industry, online advertising and more.

Between 60 and 80 people worked on the core AI model, while between 120 and 150 people worked on its technical aspects, Anthropic co-founder Daniela Amodei told CNBC in an interview. For the AI model’s last iteration, a team of 30 to 35 people worked directly on it, with about 150 people total supporting it, Amodei told CNBC in July.

Anthropic said Claude 3 can summarize up to about 200,000 words, or a sizable book (think: around the length range of “Moby Dick” or “Harry Potter and the Deathly Hallows”). Its previous version could only summarize 75,000 words. Users can input large data sets and ask for summaries in the form of a memo, letter or story. ChatGPT, by contrast, can handle about 3,000 words.
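As a rough illustration of how that long-context summarization might look in practice, here is a minimal sketch using the same Python SDK; the document path, prompt wording and model ID are assumptions for illustration, not details from the article.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "report.txt" is a hypothetical long document; Claude 3's context window
# can accommodate roughly book-length text.
with open("report.txt", encoding="utf-8") as f:
    document = f.read()

message = client.messages.create(
    model="claude-3-opus-20240229",  # Opus model ID at launch (assumed here)
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"Summarize the following document as a one-page memo:\n\n{document}",
        }
    ],
)

print(message.content[0].text)
```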

Amodei also said Claude 3 has a better understanding of risk in responses than its previous version.

“In our quest to have a highly harmless model, Claude 2 would sometimes over-refuse,” Amodei told CNBC. “When somebody would kind of bump up against some of the spicier topics or the trust and safety guardrails, sometimes Claude 2 would trend a little bit conservative in responding to those questions.”

Claude 3 has a more nuanced understanding of prompts, according to Anthropic.

Multimodality, or adding photo and video capabilities to generative AI, whether users upload the media themselves or create it with an AI model, has quickly become one of the industry’s hottest use cases.

“The world is multimodal,” OpenAI COO Brad Lightcap told CNBC in November. “If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things — the world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.”

But multimodality and increasingly complex AI models also bring more potential risks. Google recently took its AI image generator, part of its Gemini chatbot, offline after users discovered historical inaccuracies and questionable responses, which circulated widely on social media.

Anthropic’s Claude 3 does not generate images; instead, it only allows users to upload images and other documents for analysis.

“Of course no model is perfect, and I think that’s a very important thing to say upfront,” Amodei told CNBC. “We’ve tried very diligently to make these models the intersection of as capable and as safe as possible. Of course there are going to be places where the model still makes something up from time to time.”



This story originally appeared on CNBC
