Deceptive AI: A New Wave of Cyber Threats



As artificial intelligence (AI) technology advances, its influence on social media has become increasingly pervasive and riddled with challenges.

In particular, it is becoming harder for people to discern genuine content from AI-generated material.

Our recent survey, conducted with OnePoll among more than 2,000 UK workers, found that a substantial portion of social media users are struggling to navigate this new digital frontier.

Alarmingly, one-third of respondents admitted they are not confident in their ability to spot AI-generated content on social platforms, while half of those surveyed reported frequently encountering fraudulent accounts that seem to be using AI-generated images, messages, or other content to deceive users.

A further 28% said they believe AI will generally have a negative impact on the security of data, information and interactions on social media.

This growing trend raises important questions about the implications for user trust, security and the future of online interactions.

An Uptick in AI-Generated Content on Social Media
AI-generated content (now commonly abbreviated to AIGC) is becoming more sophisticated and harder to detect. From deepfake videos to AI-crafted text, the line between real and artificial is increasingly blurred. These technologies can create images of people who don’t exist, generate hyper-realistic videos and produce text that mimics human writing with remarkable accuracy. What’s more, AIGC has the power to influence, potentially shaping people’s perceptions, political views and ideals.

While AI's capabilities are impressive, it’s easy to see how this explosive growth is likely to become problematic. Meta CEO Mark Zuckerberg stated in 2022 that 15% of feed content was AI generated and that the company expected that figure to more than double by the end of 2023. It’s not hard to see why some people are struggling to recognise AIGC.

The difficulty in identifying AI-generated content is not just a matter of technical know-how. The very nature of AI content is designed to appear authentic, often fooling even the most discerning users. For example, AI can generate fake profiles with convincing photos, bios and posts that mimic human behavior. These accounts can be used for various deceptive purposes, from spreading misinformation to scamming unsuspecting users into making purchases or investments.

The Threat of Fraudulent Accounts
The prevalence of fraudulent accounts on social media is a further area for concern. According to the survey, half of the respondents reported that they often or sometimes encounter accounts that they believe are using AI-generated content to deceive others. These fraudulent accounts can be involved in a range of malicious activities, including phishing scams, identity theft and the dissemination of false information.

AI-generated images, in particular, pose a unique challenge. With AI, it's now possible to create images of people who look entirely real but don’t actually exist. These images can be used to create fake profiles that seem genuine at first glance, making it easier for bad actors to take advantage of users, from spoofing accounts of family and friends to elaborate romance scams.

Moreover, AI can generate content at scale, allowing fraudsters to create thousands of fake accounts with minimal effort, mostly designed to exploit people and their inherent need to connect with or be influenced by others. 

The Impact on Trust
This mounting presence of AI-generated content and fraudulent accounts on social media has significant implications for user trust. Social media platforms thrive on the trust that users place in the content they see and the interactions they have. However, when users begin to doubt the authenticity of what they encounter online, it can erode trust in the platform as a whole. 

For businesses and influencers, this erosion of trust can be particularly damaging. Brands that rely on social media for marketing and customer engagement may find it harder to connect with their audience if users are increasingly sceptical of the content they see. Similarly, influencers who build their reputation on authenticity may struggle to maintain their credibility on a platform where fake content is taking over.

What Can Be Done?
It is of course an exciting new era of AI - we’ve all seen friends, family or colleagues try out AI filters or use AI picture generators for a bit of fun on social media. However, cybercriminals will also exploit these new technologies for their own gain, so it’s imperative to build awareness of AIGC and how to spot potentially harmful content. Tackling the issue will, of course, require a multi-pronged approach.

For instance, social media platforms need to take a lead in investing in advanced detection tools that can identify AI-generated content and fraudulent activity. This could include systems that are able to spot the subtle inconsistencies in AI-generated images or text, or more robust verification processes for accounts to ensure there is an actual person behind them.

In addition to platform-level solutions, there is a critical need for user education and awareness. Social media users need to be equipped with the knowledge and tools to spot fake content and protect themselves from fraud. This might include educational campaigns that teach users about the common signs of AI-generated content, the red flags to look out for and the risks of interacting with dubious accounts.

Moreover, regulatory frameworks may need to be updated to address the unique challenges posed by AI-generated content. Governments and regulatory bodies will no doubt play a role in setting standards for transparency and accountability in the use of AI on social media. This might include requiring platforms to disclose when content is AI-generated or holding platforms accountable for the spread of fraudulent accounts.

Adapting and Advancing
AI’s impact on social media is likely to grow, and the situation may well get worse before it gets better. But with a little preparation, a healthy dose of scepticism and some user awareness, there doesn’t need to be a compromise on trust and security.

With a third of users lacking confidence in their ability to identify AI-generated content, and half frequently encountering fraudulent accounts, it is clear that both users and platforms need to adapt to this new reality. By investing in detection tools, educating users and paying attention to updates in regulatory frameworks, we can fight deception and work towards an online world that is safer and more secure in the age of AI.


Get Your Free Phishing Security Resource Kit

Phishing emails increase in volume every month and every year, so we created this free resource kit to help you defend against attacks. Request your kit now to learn phishing mitigation strategies, what new trends and attack vectors you need to be prepared for, and our best advice on how to protect your users and your organization.

Here's what you'll get:

  • Access to our free on-demand webinar Your Ultimate Guide to Phishing Mitigation featuring Roger A. Grimes, KnowBe4’s Data-Driven Defense Evangelist
  • Our most popular phishing whitepaper: Comprehensive Anti-Phishing Guide E-Book
  • A video that explains How to Avoid Phishing Attacks
  • Our most recent quarterly Top-Clicked Phishing Email Subjects infographic
  • Posters and digital signage to remind users about what to watch out for 

Get Your Kit Now!

PS: Don't like to click on redirected buttons? Cut & Paste this link in your browser:

https://www.knowbe4.com/phishing-resource-kit 

Topics: Phishing
