Hackers could use artificial intelligence tools to steal user passwords with near-perfect accuracy by “listening” to an unsuspecting person’s keystrokes, according to the alarming results of a study published earlier this month.
A group of UK-based computer scientists trained an artificial intelligence model to identify the sounds generated by keystrokes on the 2021 version of a MacBook Pro — described as a “popular off-the-shelf laptop.”
When the AI program was run on keystroke audio recorded by a nearby smartphone, it was able to reproduce the typed password with a whopping 95% accuracy, according to the study, published as a preprint on Cornell University’s arXiv repository.
The hacker-friendly AI tool was also extremely accurate while “listening” to typing through the laptop’s microphone during a Zoom video conference.
Researchers said it reproduced the keystrokes with 93% accuracy – a record for the medium.
The researchers warned that many users are unaware of the risk that bad actors could monitor their typing to breach accounts – a type of cyberattack they called an “acoustic side channel attack.”
“The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output,” the study said.
“For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”
To train and evaluate the model, the researchers pressed 36 of the laptop’s keys 25 times each, with each press “varying in pressure and finger.”
The program learned to recognize identifying elements of each key press, such as its characteristic sound frequencies. The smartphone used to record the audio, an iPhone 13 mini, was placed 17 centimeters away from the keyboard.
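To illustrate the general idea (not the researchers’ exact pipeline), the sketch below shows how recordings of individual key presses might be turned into spectral features and fed to a simple classifier. The file layout, feature settings and nearest-neighbor model here are assumptions for the example only.

```python
# Illustrative sketch only: one way an acoustic side-channel classifier could be
# put together. The file paths, feature settings and nearest-neighbor model are
# assumptions for this example, not details taken from the study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier

KEYS = "abcdefghijklmnopqrstuvwxyz0123456789"  # 36 keys, matching the study's setup

def keystroke_features(path):
    """Convert a short recording of one key press into a fixed-size feature vector."""
    rate, audio = wavfile.read(path)              # WAV clip containing a single keystroke
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                # collapse stereo to mono
    _, _, spec = spectrogram(audio.astype(float), fs=rate, nperseg=256)
    spec = np.log1p(spec)                         # compress the dynamic range
    # Pad or trim the time axis so every clip yields the same number of features.
    spec = np.pad(spec, ((0, 0), (0, max(0, 64 - spec.shape[1]))))[:, :64]
    return spec.ravel()

# Hypothetical training data: 25 labelled recordings per key (36 x 25 presses,
# mirroring the data-collection step described above). Paths are placeholders.
X = [keystroke_features(f"clips/{k}_{i}.wav") for k in KEYS for i in range(25)]
y = [k for k in KEYS for _ in range(25)]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([keystroke_features("clips/unknown_press.wav")]))  # best guess at the key
```

The published attack used a deep learning model rather than nearest neighbors; the point of the sketch is only the overall shape of the pipeline: record a key press, extract its spectral signature, classify it.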
The research was conducted by Joshua Harrison of Durham University, Ehsan Toreini of the University of Surrey and Maryam Mehrnezhad of Royal Holloway, University of London.
The possibility of AI tools aiding hackers is just another risk factor for the burgeoning technology.
A number of notable figures, including OpenAI CEO Sam Altman and billionaire Elon Musk, have warned that AI could pose a significant danger to humanity without proper guardrails in place.
This story originally appeared on NYPost