Last month, Retool, a business software development company, fell victim to a sophisticated cyberattack that compromised 27 of its cloud customers.
The attack was a toxic cocktail of social engineering, AI deepfake technology, and Google Authenticator's cloud-syncing feature, which Retool argues amounts to a vulnerability.
The attacker initiated the breach by sending phishing SMS messages to Retool employees, posing as an IT team member addressing a payroll issue. While most employees ignored the message, one clicked the URL and landed on a fake login portal that prompted for the employee's credentials and multi-factor authentication (MFA) code.
Here's where it gets eerie: the hacker then called the employee using an AI-generated deepfake of a familiar voice from the IT team. Despite growing suspicion, the employee gave away an additional MFA code. This suggests the attacker had prior knowledge of the company, possibly indicating an earlier infiltration.
Once the MFA code was surrendered, the hacker gained access to the employee's G Suite account. This was particularly damaging because Google Authenticator's new cloud-syncing feature stores MFA seeds in the Google account itself, so anyone controlling the account can generate the codes on their own device. Retool emphasized that this Google feature was a significant vulnerability: compromising a single Google account now also exposes every MFA code synced to it.
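Retool's point is easy to see in code: TOTP codes of the kind Google Authenticator displays are derived entirely from a shared secret and the current time, so whoever obtains a synced seed can mint valid codes on any device. Below is a minimal sketch of standard TOTP (RFC 6238) in Python for illustration; it is not Google's actual implementation, and the seeds shown are arbitrary test values.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret.

    Anyone holding the secret can compute the same code -- which is why a
    synced authenticator seed is as sensitive as a password.
    """
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: the base32 encoding of the ASCII secret
# "12345678901234567890", evaluated at t=59 with 8 digits.
rfc_seed = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(rfc_seed, t=59, digits=8))  # "94287082" per RFC 6238

# Two "devices" sharing one synced seed always agree on the code:
seed = "JBSWY3DPEHPK3PXP"  # arbitrary demo seed
assert totp(seed, t=1_000_000) == totp(seed, t=1_000_000)
```

The takeaway matches Retool's warning: the seed, not the six-digit code, is the real secret, and cloud-syncing moves that seed off the phone and into the Google account.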
Retool has since revoked the hacker's access and is sharing its experience to alert other companies. The incident underscores the evolving threats in cybersecurity, highlighting the need for a strong security culture and updated security procedures. Retool also urged Google to reconsider the cloud-syncing feature in its Authenticator app.
Deepfakes on the Rise: How to Fortify Your Cyber Defenses Now
The United States FBI, NSA, and CISA have released a joint report outlining the various social engineering threats posed by deepfakes.
“Threats from synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications, including National Security Systems (NSS), the Department of Defense (DoD), the Defense Industrial Base (DIB), and national critical infrastructure owners and operators,” the report says.
“As with many technologies, synthetic media techniques can be used for both positive and malicious purposes. While there are limited indications of significant use of synthetic media techniques by malicious state-sponsored actors, the increasing availability and efficiency of synthetic media techniques available to less capable malicious cyber actors indicate these types of techniques will likely increase in frequency and sophistication.”
The agencies conclude that organizations should use a combination of techniques and technologies to defend themselves against these attacks. “Organizations can take a variety of steps to identify, defend against, and respond to deepfake threats,” the report says.
“They should consider implementing a number of technologies to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high priority officers and their communications. Organizations can also take steps to minimize the impact of malicious deepfake techniques, including information sharing, planning for and rehearsing responses to exploitation attempts, and personnel training.”
The report adds, “Every organization should incorporate an overview of deepfake techniques into their training program. This should include an overview of potential uses of deepfakes designed to cause reputational damage, executive targeting and BEC attempts for financial gain, and manipulated media used to undermine hiring or operational meetings for malicious purposes. Employees should be familiar with standard procedures for responding to suspected manipulated media and understand the mechanisms for reporting this activity within their organization.”
NSA has the story.