Google: ‘Generative AI allows for more sophisticated cyber attacks’

In a research report, Google warns of the impact generative AI will have on cyber attacks. 2024 already promises to be a challenging year for cybersecurity.

Generative AI has many useful applications, but the technology is not without risks. In its Cybersecurity Forecast 2024, Google’s security experts look ahead to what awaits us. Unfortunately, the report gives little reason to look to 2024 with optimism: Google ranks generative AI among the biggest emerging security risks.

The cybersecurity world has been concerned about the widespread availability of generative AI tools for some time, but according to Google, cybercriminals will only start using them on a large scale from next year. Generative AI can serve them in several ways. First and foremost, AI tools help compose grammatically correct emails. Until now, language errors were one of the first indicators for distinguishing a fake email from a real one, but that is no longer the case.

With the help of generative AI, attackers can also take a more targeted approach. For this they need no more than the information from your publicly visible social media profiles, which they then feed to the AI. If fraudsters know where you live and work, they can address you much more personally. Tools like ChatGPT may have built-in safeguards against illegal use, but asking the chatbot to help with a quote or a standard email will rarely set off alarm bells.

Brittle trust

Google also fears that AI-generated content will increasingly find its way into public information. With generative AI, you can create a fake news report, complete with footage, by cleverly playing with prompts. This makes gullible people even more susceptible to misinformation, or it can have the opposite effect and make people wary of everything they see, hear and read. That fear is far from fiction: Adobe is under fire for offering AI-generated images of the war in Gaza in its stock image library, some of which have already been used by news media.

Attackers can already do a lot with currently available tools, but Google expects that they will also develop and offer AI applications of their own on the dark web. The barrier to using generative AI for malicious purposes will thus keep getting lower. Just as we have ‘ransomware-as-a-service’ today, ‘LLM-as-a-service’ will emerge in the cybercrime scene in 2024.

AI as defender

Fortunately, it’s not all negative. Generative AI may develop into a dangerous weapon, but the technology can also benefit defenders. A key application of AI is to synthesize large amounts of data and contextualize it into threat intelligence to deliver actionable detections or other analytics.

AI can thus offer a solution to the shortage of people in the industry. We asked other experts what they think about this during our recent roundtable discussion.

Cyberwar in space

2024 already promises to be an exciting cyber year. Google also expects plenty of geopolitical digital friction. In addition to the military conflicts in Ukraine and Gaza, the many elections in Western countries and even the Olympic Games will provide a breeding ground for state-sponsored attacks and hacktivism.

Google points to the ‘big four’: China, Russia, North Korea and Iran. Nor will cyber attacks be limited to the face of the earth: Google believes that space infrastructure such as satellites will become an increasingly important target for such attacks.
