One in three UK consumers believe that over half of all advertisements on websites or social media are generated by AI, according to independent research conducted by Censuswide on behalf of Menlo Security, a specialist in cloud security.
Citing the rise in convincing fake ads created with AI tools such as ChatGPT and image generators like Midjourney and DALL-E, Menlo Security is warning of an increase in “malvertising,” a highly evasive threat in which malware is embedded in online or social media advertisements. The study also shows that many people are unaware of the dangers of clicking on bogus, and therefore potentially malicious, advertisements.
Despite an increase in impersonation of brands such as Microsoft and Google, the vast majority of respondents (70%) are unaware that clicking on a brand logo can infect their device with malware. Nearly half (48%) are unaware that clicking on pop-ups can do the same, and 40% are unaware of the risk posed by banners. By contrast, almost three quarters (73%) know that links in emails can carry malware.
According to the study, 70% of consumers click on online advertisements “to some extent,” even though AI-generated ads make it harder to tell whether an ad is malicious. People may unintentionally download malware onto their devices simply by visiting sites that carry infected advertisements. Menlo Security warns that this number could rise as more AI tools and software become accessible and easy to use. On average, one in every 100 online advertisements is malicious.
Almost a third of respondents (31%) did not feel confident in their ability to recognise and avoid malicious threats. The figure rises to 41% among over-55s and 40% among women.
The nature of a website affects how much consumers trust it. One in five believe that social networking sites such as Facebook and Instagram do not carry fraudulent advertising, while only 14% believe the same of Twitter. Sites such as Amazon (28%) and Google (25%) earn slightly more trust.
Tom McVey, AI security spokesperson at Menlo Security, stated: “Malvertising and other highly evasive threats will only grow as AI-generated content becomes more common online. When used maliciously, AI can produce not only convincing text but also images made to look like well-known brands or logos. According to our research, malware can be downloaded online in as few as three to seven clicks.
“By tricking victims into clicking a fake link, cybercriminals can inject malware onto their devices, typically for financial gain. We anticipate a significant increase in malvertising as a result of the easy availability of malware-as-a-service and AI-generated text and images, which allow even novice attackers to create convincing advertisements.
“The study found that only 32% would not trust any website to be free of malvertising; awareness of the risks needs to increase so that anyone online, regardless of how much they trust a website, avoids clicking on advertisements. For instance, we found that Microsoft, Facebook, and Amazon were the top three brands impersonated by malicious threat actors seeking to steal personal and confidential data over the past ninety days. It may shock some people to learn that even the most trusted websites are susceptible to malvertising.”