Amid the rising hum of disinformation, AI-generated avatars are now poised to compromise your systems, not just what’s left of your reality.
AI-generated avatars tricking users into installing info-stealing malware
By Bidisha Saha: Misinformation peddlers and hackers are embracing Artificial Intelligence (AI) tools to create convincing fake videos that lure users, via bogus tutorials, into downloading cracked versions of licensed software. Cloudsek, a Bengaluru-based cyber intelligence firm, has identified an increase in YouTube videos whose description boxes carry links to infostealer malware such as Vidar, RedLine, and Raccoon.
These YouTube videos feature advanced AI avatars, or digital deepfakes, in a persuasive counterfeit that coaxes users into downloading information-stealing malware onto their systems. Deepfake technology has the potential to undermine the credibility of almost everything we see online, from mobile applications that transpose people’s faces into blockbuster movies to fake statements attributed to public figures such as Barack Obama, Elon Musk and Mark Zuckerberg. It has the ability to create ‘digital puppets’.
TARGET
Threat actors show a greater affinity for popular YouTube accounts with more than 100,000 subscribers, targeting them to reach a large audience and lend the video a sense of legitimacy. But less active or less popular accounts are also sometimes taken over.
ACTION
Cloudsek noted that at least five to six videos of cracked software were uploaded immediately to each hijacked YouTube account and used as bait to push users into clicking the malicious link in the description box.
METHODS OF DECEPTION
YouTube’s moderation team reviews reports from affected users and is quick to terminate such accounts and take the videos down. Hence, this hacker group has been using some ingenious techniques to stay ahead of the game.
1. Using AI Avatars: A series of videos features AI-generated personas programmed to speak with a human-like familiarity and conviction, making it difficult to discern what’s real from the forgeries online. This new breed of ‘digital puppets’ is mostly used for training videos, educational tutorials, promotions and other human resources purposes. But recent occurrences point to deepfake avatars being used to disseminate propaganda, spread disinformation and create fake identities, raising cybersecurity concerns.
There are multiple instances of AI-generated avatars from platforms such as Synthesia and D-ID appearing in tutorial videos on downloading cracked software tools, with the videos carrying disguised links to the malware.
2. Masking Links: In most observed cases, the malicious links planted in the description box of a YouTube video are concealed using URL shorteners such as bit.ly, links to file-hosting platforms such as mediafire.com, or links to download a malicious zip folder. Hence, at first glance, these links do not appear suspicious to users.
3. Misleading Hashtags: The threat actors use an array of popular hashtags in unrelated contexts, in different languages and tailored to specific locations, to ensure the targeted video appears among YouTube’s top results. The keywords are often random and aim to deceive the YouTube algorithm into recommending the video to as many users as possible.
4. Making the Video Look Legitimate: To increase a video’s apparent legitimacy, threat actors often flood the comment section with comments claiming the video worked for them. Cloudsek noted several targeted videos with identical comments added within hours of the videos going online, suggesting a coordinated operation.
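The link-masking technique described in point 2 can be countered with basic hygiene: checking the host of every link in a video description against known shortener and file-hosting domains before clicking. A minimal sketch in Python, assuming a small hand-picked deny-list (a real defence would use a maintained threat-intelligence feed):

```python
import re
from urllib.parse import urlparse

# Hypothetical deny-list for illustration; real tooling would pull
# these domains from a maintained threat-intelligence feed.
SUSPICIOUS_HOSTS = {"bit.ly", "tinyurl.com", "mediafire.com"}

URL_RE = re.compile(r"https?://\S+")

def flag_masked_links(description: str) -> list:
    """Return any URL in a video description whose host is on the deny-list."""
    flagged = []
    for url in URL_RE.findall(description):
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]  # treat www.mediafire.com like mediafire.com
        if host in SUSPICIOUS_HOSTS:
            flagged.append(url)
    return flagged
```

A flagged link is not proof of malice, since legitimate creators also use shorteners, but it is a cheap first filter before deciding whether to click.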
The threat actors plant links to information-stealing malware, also called infostealers. These can steal passwords, credit card information, crypto wallet data and credentials, bank account numbers, Telegram data and other confidential information. Once installed on a system, an infostealer harvests information from the computer and uploads it to the attacker’s command-and-control (C2) server.
The investigation by Cloudsek also suggests that the infostealer developers are responsible for developing and updating the malware code to ensure that antivirus and other endpoint detection systems do not detect the stealer when it is downloaded to a computer.
Pavan Karthick, a cybersecurity expert at Cloudsek, says, “These cybercriminals are targeting popular accounts to gain maximum exposure to their malicious content, and less popular accounts because the videos uploaded to them remain available for an extended period of time.”
“They’re also using SEO optimisation with region-specific tags to deceive the YouTube algorithm and ensure their videos appear as top results. Organisations and individuals must stay vigilant and report any suspicious activity to prevent further damage and mitigate the risk of falling victim to these attacks,” he adds.
Speaking to India Today on ways to counter attempts at persuasive counterfeit that threatens to blur the lines of fact and fiction, Karthick says, “As cyber criminals become more sophisticated in their tactics, relying solely on string-based rules is no longer a viable solution. Malware that dynamically generates and encrypts strings can easily evade traditional detection methods. To stay ahead of constantly evolving threats, organisations must embrace real-time adaptive threat monitoring, which involves closely monitoring threat actors’ changing tactics, techniques, and procedures. It’s crucial to educate users on how to identify potential threats and to enable multi-factor authentication. Remember, downloading pirated software is not worth the risk.”
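Karthick’s point about string-based rules can be made concrete with a toy example: a static signature finds a plaintext indicator in one binary but misses the same indicator once it is XOR-obfuscated, even though the malware would decode it at runtime. All strings below are made up for illustration; no real malware is involved.

```python
# Hypothetical indicator string that a naive static rule might look for.
SIGNATURE = b"stealer.example/c2"

def xor_bytes(data: bytes, key: int) -> bytes:
    """Trivial single-byte XOR obfuscation of the kind Karthick describes."""
    return bytes(b ^ key for b in data)

plain_binary = b"...header..." + SIGNATURE + b"...code..."
obfuscated_binary = b"...header..." + xor_bytes(SIGNATURE, 0x5A) + b"...code..."

# A naive string rule catches the plaintext binary...
assert SIGNATURE in plain_binary
# ...but misses the obfuscated one, which decodes the string only at runtime.
assert SIGNATURE not in obfuscated_binary
# Decoding recovers the original indicator, as the running malware would.
assert xor_bytes(xor_bytes(SIGNATURE, 0x5A), 0x5A) == SIGNATURE
```

The obfuscated sample is byte-for-byte different while being functionally identical, which is why Karthick recommends behaviour-focused, adaptive monitoring over string signatures alone.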