A new arms race?

Author: Ben Addley, Managing Partner and Head of Research, Heligan Strategic Insights  

On 10 June, I attended the 15th annual UCL International Crime Science Conference in London. Hosted at the magnificent 1930s art deco Wellcome Collection building opposite Euston Station, around 200 data scientists, security practitioners, policymakers, technologists and academics filed in, collected their free coffee and got ready for a day focused on the conference’s headline theme of “Fraud, Fakery, and Deception”.

Fraud is now arguably the world’s most prevalent crime type. According to the UK National Crime Agency, it accounts for over 40% of crime in England and Wales. According to the Office for National Statistics, it was the most common crime type between April 2022 and March 2023, with an estimated 3.5 million incidents experienced by adults aged 16 and over. With the advent of new technologies such as AI, fraudulent activity has evolved too. Criminal actors and organisations are exploiting the new digital landscape, including deepfake technologies, to commit new types of fraud, to spread misinformation and to commit existing crimes in ever more harmful ways. Election fraud, spear phishing, spoofed websites, DDoS attacks, data theft, impersonation scams: the list of tech-enabled fraud is varied and constantly changing.

With multiple streams of talks to choose from, I focused on emerging threats, law enforcement, and how AI is shifting the sands of what is possible today and what might happen tomorrow, both from the perspective of those policing the threats and from that of the criminals and hostile state actors exploiting vulnerabilities. Some major themes emerged from this intersection of research and debate that certainly helped me crystallise my own thinking on how society is affected by AI, for good and for bad.

There was a clear consensus that some form of governance over AI is needed, and that efforts by the EU and the Bletchley Declaration signed in November 2023 are good starts, but not necessarily the final answer. How, for instance, can we prosecute deepfake crime under existing laws? Are new offences needed? Much thinking remains to be done, given the globalised, digital nature of AI-enabled crime. Some felt that responsibility needed to be delegated from the state to private actors such as businesses.

One horrific theme that unfortunately appears to be driving many of the AI advancements in deception and fakery is offenders’ misuse of emerging tech to facilitate crimes against children online. According to DI Andy March of the National Police Chiefs’ Council, AI-generated imagery is becoming ever more photorealistic; LLMs are enabling chatbots that better simulate the conversations of children; and text-to-speech systems driven by LLMs and voice deepfake technologies are enabling adults to pose as children to groom more effectively online.

For me, though, the one theme that stood out loud and clear above all others was the apparent arms race we are now in with AI. To be clear, this is not human versus machine; rather, it is machine versus machine. We may already have lost the battle in terms of being able to outwit AI technology: the speed of advancement, and the scale of new capability being developed at a rate never seen before, would tend to support that rather dystopian view. A key question we must ask is how we stop AI being used by criminals and hostile state actors for nefarious and illegal ends. One answer is to build better ‘good AI’ to combat the ‘bad AI’. Simplistic and naïve as this may sound, there is unfortunately historical precedent. For over 40 years after World War II, the world engaged in a dangerous game of brinkmanship, developing ever more sophisticated weapons systems and delivery mechanisms, ultimately creating an environment of mutually assured destruction should some idiot ever actually be stupid enough to press the big red button!

By entering into a similar arms race with AI as the principal weapon, we may be able to stave off the rather unsettling thought of criminal use of AI gaining the upper hand. 

Unfortunately, as is usually the case, new technology is adopted and adapted fastest by those who wish to use it for ill. The conference presented multiple examples of how criminals and state-sponsored hacking groups are using misinformation, disinformation, and deepfake imagery and audio to manipulate victims and create new vectors for old forms of crime such as fraud.

So, ultimately, the conclusion is that the ‘good guys’ need to get better at outwitting the ‘bad guys’ and faster at turning the same tech against them. Is this a new arms race? Not yet… but I fear it may be coming.