Intelligent CXO Issue 40 | Page 22

EDITOR’S QUESTION

Cybercriminals have never been better equipped to cause mass disruption and damage to organisations. Generative AI hit the mainstream in November 2022 with OpenAI’s ChatGPT. At the time of its launch, it was considered a relatively benign tool. The greatest concern was students cutting corners in essay writing. However, leading technology experts and governments across the world are actively working on regulation and legislation for the technology owing to fears it could weaponise disinformation, discrimination and impersonation.

This move towards weaponisation is something we have already seen. As early as a month after the platform was widely available, our researchers identified Large Language Models (LLMs) being used to lower the bar for code generation, helping unskilled threat actors effortlessly launch cyberattacks. In some cases, ChatGPT was creating a full infection flow, from spearphishing to running a reverse shell. It is only a matter of time before we see automated malware campaigns launched faster than human beings can respond to them.
THE SAME SPEED AND AUTOMATION FUELLING THESE ATTACKS CAN BE USED TO BOLSTER OUR DEFENCES.
Obstacles continue to arise as the fight to protect critical services from advancing AI-generated threats develops each day. Attacks on the UK’s critical infrastructure bring cybersecurity to the forefront of conversation, which of course is a good thing, but they also highlight that when it comes to protecting our public services, there is an urgent need for more robust security. Public awareness and understanding of AI are growing, but with that come questions about how to address the anticipated global risk of AI, questions which are becoming increasingly difficult to answer.
In practical terms, it means fighting fire with fire – specifically, leveraging the technology that can cause destruction for defensive action, to fortify IT infrastructure and bolster the cybersecurity team’s capabilities. The same speed and automation fuelling these attacks can be used to bolster our defences. Something else that is top of mind for cybersecurity professionals is how to protect their AI assets and the associated data lakes. AI poisoning or unintended data sharing is very much an area of concern, so building the right controls around this will enable security teams to proactively identify vulnerabilities and weaknesses in systems, applications and networks before they can be exploited.
Generative AI represents a double-edged sword in the world of cybersecurity. For attackers, it’s an accelerant for criminal activities, while for defenders it could help stamp out those rapidly growing fires. For example, by using it to generate realistic, synthetic data that mirrors real-world cyberthreats, we can augment existing threat intelligence feeds, providing cybersecurity professionals with a broader and more diverse set of data to analyse. By improving our understanding of emerging threats and countermeasures, we can stay ahead of potential attackers.
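To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch of the augmentation step. It is not Check Point’s method or any specific product: it simply recombines made-up phishing-lure templates into labelled samples that could be mixed into an existing threat-intelligence dataset. All template strings, service names and field names below are illustrative assumptions; a real pipeline would use an LLM and curated threat data.

```python
import random

# Hypothetical lure templates and service names (illustrative only).
SUBJECT_TEMPLATES = [
    "Action required: verify your {service} account",
    "Your {service} invoice #{ref} is overdue",
    "Security alert: new sign-in to {service}",
]
SERVICES = ["payroll portal", "cloud storage", "VPN gateway"]

def synthesise_lures(n, seed=0):
    """Generate n labelled synthetic phishing subjects.

    A fixed seed keeps the output reproducible, so the same
    augmented dataset can be rebuilt for later experiments.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        template = rng.choice(SUBJECT_TEMPLATES)
        subject = template.format(
            service=rng.choice(SERVICES),
            ref=rng.randint(1000, 9999),  # ignored by templates without {ref}
        )
        samples.append({"subject": subject, "label": "phishing"})
    return samples

def augment_feed(real_feed, n_synthetic):
    """Append synthetic samples to an existing (real) feed."""
    return list(real_feed) + synthesise_lures(n_synthetic)
```

In practice, the value lies in the broader distribution of examples the enlarged feed gives analysts and detection models, not in any single generated lure.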

MUHAMMAD YAHYA PATEL, LEAD SECURITY ENGINEER, CHECK POINT SOFTWARE
