Welcome to Eye on AI! In this edition…xAI releases Grok 4 amid backlash over antisemitic posts…Perplexity launches AI browser to take on Google—starting with its power users…OpenAI is reportedly gearing up to launch its own web browser.
An audio deepfake impersonating Secretary of State Marco Rubio contacted foreign ministers, a U.S. governor, and a member of Congress with AI-generated voicemails mimicking his voice, according to a senior U.S. official and a State Department cable dated July 3.
There’s no public evidence that any of the recipients of the messages, reportedly designed to extract sensitive information or gain account access, were fooled by the scam. But the incident is the latest high-profile example of how easy—and alarmingly convincing—AI voice scams have become.
With just 15 to 30 seconds of someone’s speech uploaded to services like ElevenLabs, Speechify, and Respeecher, it’s now possible to type out any message and have it read aloud in their voice. Keep in mind, these tools are used perfectly legitimately for everything from accessibility to content creation, but like many AI technologies, they can be misused by bad actors.
The threat of deepfakes has escalated
AI-generated deepfakes aren’t new, particularly of C-suite leaders and public officials, but they are becoming a bigger problem. Eight months ago, I reported that more than half of chief information security officers (CISOs) surveyed ranked video and audio deepfakes as a growing concern. That threat has only escalated. A new study by Surfshark found that in the first half of 2025 alone, deepfake-related incidents surged to 580—nearly four times as many as in all of 2024 (150 incidents), and dramatically higher than the 64 incidents reported between 2017 and 2023. Losses from deepfake fraud have also skyrocketed, reaching $897 million cumulatively, with $410 million of that in just the first half of 2025. The most common scheme: impersonating public figures to promote fraudulent investments, which has already resulted in $401 million in losses.
“Deepfakes have evolved into real, active cybersecurity threats,” Aviad Mizrachi, CTO and co-founder of software security company Frontegg, told me by email. “We’re already seeing AI-generated video calls successfully trick employees into authorizing multimillion-dollar payments. These attacks are happening now, and it’s a scam that is becoming alarmingly easy for a hacker to deploy.”
Part of the problem, Mizrachi added, is that traditional authentication methods—usernames, passwords, one-time codes, and authenticator apps—weren’t designed for a world where a scammer can clone your voice or face in seconds. That’s because these scams don’t necessarily involve breaking into an account—they rely on tricking a real person into handing over credentials or authorizing actions themselves.
“Those traditional security measures to check the identity of an individual obviously don’t work anymore,” he said, adding that most cybersecurity teams still overlook deepfakes—and that’s the vulnerability attackers exploit. A convincing fake voice in a voicemail or on a video call can persuade someone to bypass normal procedures or approve a wire transfer, even if all the authentication tools are technically in place.
To guard against that kind of deception, Mizrachi said, organizations need to deploy stronger security tools that rely on physical devices—like a smartphone or hardware security key—to prove someone’s identity. These tools, passkeys built on the FIDO2 and WebAuthn standards, are far harder for hackers to fake or phish. And beyond device checks, smart verification systems can also monitor behavioral signals—like typing speed, location, or login habits—to spot anomalies that a cloned voice can’t imitate. Those extra layers make it much harder for a deepfake attack to succeed.
Margaret Cunningham, director of security and AI strategy at security firm Darktrace, said that the attempt to impersonate Rubio demonstrates just how easily generative AI can be used to launch convincing, targeted social engineering attacks.
“This threat didn’t fail because it was poorly crafted—it failed because it missed the right moment of human vulnerability,” she said. “People often don’t make decisions in calm, focused conditions. They respond while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.”
Deepfakes have impacted democracies around the world
Generative AI has also dramatically lowered the barrier to entry for media manipulation—making it faster, cheaper, and more scalable than ever before. And it is impacting democracies around the world: A recent New York Times report found that AI-powered deepfakes have transformed elections in at least 50 countries, often deployed to demean or defame opponents.
“This is the new frontier for influence operations,” Leah Siskind, AI research fellow at the Foundation for Defense of Democracies, told me. “We’ve seen other instances of deepfakes of senior government officials used to gain access to personal accounts, but leveraging AI to influence diplomatic relationships and decision-making is a dangerous escalation. This is an urgent national security issue with serious diplomatic ramifications.”
For now, Siskind recommends that government officials steer clear of popular encrypted platforms like Signal, which, while secure in terms of content, lack mechanisms for identity verification. “Given the ease of creating deepfake audio and building out realistic-looking accounts on any consumer-grade messaging app, senior government officials should stick to secure communication channels,” she said.
Note: Check out this new Fortune video about my tour of IBM’s quantum computing test lab. I had a fabulous time hanging out at IBM’s Yorktown Heights campus in New York, a midcentury modern marvel designed by Eero Saarinen, the architect behind the St. Louis Gateway Arch and the classic TWA Flight Center at JFK Airport. The video was part of my coverage for this year’s Fortune 500 issue, which included an article that dug deep into IBM’s recent rebound.
As I said in my piece, “walking through the IBM research center is like stepping into two worlds at once. There are the steel and glass curves of Saarinen’s design, punctuated by massive walls made of stones collected from the surrounding fields, with original Eames chairs dotting discussion nooks. But this 20th-century modernism contrasts starkly with the sleek, massive, refrigerator-like quantum computer—among the most advanced in the world—that anchors the collaboration area and working lab, where it whooshes with the steady hum of its cooling system.”
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
AI IN THE NEWS
xAI releases Grok 4 amid backlash over antisemitic posts. Elon Musk’s xAI has launched Grok 4, just months after its previous release, highlighting the rapid pace of AI development. Unveiled during a late-night livestream, Musk claimed the new chatbot outperforms most graduate students across disciplines and now supports improved voice interactions. xAI also touted benchmark results showing Grok 4 beating rivals like OpenAI. While Musk admitted the bot sometimes lacks common sense and hasn’t yet made scientific breakthroughs, he added, “that is just a matter of time.” Grok 4’s release comes just a day after xAI faced backlash for antisemitic content generated by the chatbot on X. The company removed the offensive posts and said it has since taken steps to block hate speech before Grok’s responses are published on the platform.
Perplexity launches AI browser to take on Google—starting with its power users. Perplexity is stepping up its challenge to Google with the launch of Comet, its first AI-powered web browser. Available initially to $200/month Max subscribers and a limited waitlist, Comet integrates Perplexity’s signature AI search engine front and center—offering summarized answers instead of traditional search results. The browser also debuts Comet Assistant, an in-browser AI agent that can summarize emails and calendar events, manage tabs, and interact with webpages in real time. CEO Aravind Srinivas has framed Comet as more than just a browser—it’s a stepping stone to what he calls an “AI operating system” designed to deeply embed Perplexity into users’ daily workflows. The move puts Perplexity in more direct competition with Chrome and Google’s own AI search experiments, as the startup bets on its browser becoming the new gateway for how people find and interact with information online.
OpenAI is reportedly gearing up to launch its own web browser. According to Reuters, OpenAI is also joining the browser wars as it moves to compete more directly with Google, and now Perplexity. Citing three sources familiar with the plans, the report says the browser could debut in the coming weeks and is designed to keep some user interactions within a ChatGPT-style interface—rather than directing users to external websites. The move would also give OpenAI access to the kind of user data that has long powered Google’s dominance in search.
FORTUNE ON AI
Apple’s AI efforts ‘have struck midnight’ and the only way it can stop getting further behind is acquiring Perplexity, analyst Dan Ives says —by Marco Quiroz-Gutierrez
Would you replace your CEO with an AI avatar? —by Alexandra Sternlicht
Why Coca-Cola’s CIO prioritizes big-impact AI pilot projects —by John Kell
Amazon’s tariff-clouded, seller-confused, AI-researched, weirdest Prime Day ever —by Jason Del Rey
AI CALENDAR
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
EYE ON AI NUMBERS
80%
That’s the share of business software that will be powered by AI that understands more than just text—including images, video, audio, and other data—by 2030, according to a new report from Gartner. That’s a huge jump from less than 10% today.
This shift, known as “multimodal AI,” could change how businesses operate across industries like healthcare, finance, and manufacturing. For example, it could help software make smarter decisions by analyzing a mix of information (like medical images and patient notes), or even take proactive steps—like flagging fraud or optimizing supply chains—without human input.
Gartner analysts say companies building software will need to start investing now in these new AI technologies to stay competitive and deliver real value to their customers.
This story originally appeared on Fortune