
EDITOR'S QUESTION
Maurice Uenuma, VP & GM Americas, Blancco

AI has brought with it new, sophisticated cyberattacks. In this month's Editor's Question, four experts outline effective cybersecurity approaches with regard to AI, starting below with Maurice Uenuma, VP & GM Americas at Blancco.

AI is a transformative technology that, while still nascent, already shows great potential to enable hackers and cybersecurity professionals alike. Attackers will benefit from more realistic social engineering schemes and the ability to identify exposed vulnerabilities more quickly and develop new exploits more efficiently. At the same time, defenders will be able to leverage AI-enabled security platforms to detect attacks underway more rapidly and accurately, identify and mitigate vulnerabilities, develop and deploy patches more quickly, and so forth. AI will be an integral part of cybersecurity going forward, and CXOs will need a working knowledge of both.
GenAI tools will be leveraged for a broad range of personal and business uses, so we must build security and privacy controls into these systems at the outset, while encouraging – and enforcing, when necessary – their responsible use. Take GenAI platforms (like ChatGPT) as a good example. Without sufficient guardrails, the use of GenAI by employees can pose a significant risk to an organisation and to the employees themselves. Yes, these are powerful tools that can boost both creativity and productivity at work, but in using them, employees may accidentally (or even intentionally, in some cases) sidestep important security controls and safeguards. Sensitive information shared as inputs to GenAI platforms could ultimately expose this data to the public, and there is even the possibility of GenAI piecing together 'clues' to reconstruct accurate corporate data or Personally Identifiable Information (PII), which should be protected under current regulation.
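To make the idea of such guardrails concrete, here is a minimal sketch, in Python, of a pre-submission check that flags likely PII in a prompt before it is sent to a public GenAI platform. The regular expressions, the flag_sensitive_content function and the example prompt are illustrative assumptions only; a production control would rely on a proper data loss prevention capability rather than a handful of patterns, and this is not a description of Blancco's tooling.

```python
# Illustrative sketch only: a basic pre-submission screen that flags likely PII
# in a prompt before it leaves the organisation for a public GenAI tool.
# The patterns below are hypothetical examples, not an exhaustive PII list.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive_content(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Draft a welcome email for jane.doe@example.com, SSN 123-45-6789."
    findings = flag_sensitive_content(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
    else:
        print("Prompt passed the basic PII screen.")
```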
Training employees on the risk of unintended data exposure through public GenAI platforms is crucial. Organisations also need to create and update internal policies around what can and cannot be shared as a prompt in a public GenAI tool, as well as policies around what data is stored or regularly erased. A disciplined approach to the data that employees collect, including regular data sanitisation to remove unnecessary 'ROT' data (redundant, obsolete and trivial), will significantly reduce an organisation's data attack surface in the age of AI. These internal policies are particularly important given the rapid pace of change, which regulators will struggle to keep up with. The EU AI Act is certainly a step in the right direction, but organisations need to pay close attention to new and evolving standards to ensure their AI practices remain compliant.
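As an illustration of what a 'ROT' review might look like in practice, the sketch below surfaces files that have not been modified for a configurable number of days so they can be reviewed and sanitised under policy. The three-year threshold, the scanned directory and the find_stale_files helper are assumptions made for this example; they are not drawn from the article or from Blancco's own practice.

```python
# Illustrative sketch only: surface candidate ROT (redundant, obsolete, trivial)
# files by age so they can be reviewed and sanitised under policy. The threshold
# and directory are assumptions; real ROT reviews also weigh duplication,
# ownership and business value, not just file age.
from datetime import datetime, timedelta
from pathlib import Path

def find_stale_files(root: str, max_age_days: int = 1095) -> list[Path]:
    """Return files under root not modified within max_age_days."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [
        path
        for path in Path(root).rglob("*")
        if path.is_file()
        and datetime.fromtimestamp(path.stat().st_mtime) < cutoff
    ]

if __name__ == "__main__":
    for path in find_stale_files("./shared_drive"):
        print(f"Review for sanitisation: {path}")
```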
This technology is rightly being embraced across businesses to enhance operational efficiency and improve productivity, and in turn business outcomes. Yet in terms of future resilience-building, now is the time to carefully consider how GenAI could go wrong and identify ways to mitigate risks in its design, deployment and use. Given the general uncertainty of future risks associated with a new, transformative technology, we must approach security strategy with an emphasis on resilience-building: ensuring that critical systems can continue to operate as intended even when degraded or compromised (for any reason). Our team at Blancco is therefore approaching AI with these considerations in mind and continues to emphasise data management best practice as an essential part of staying secure.

HOW ARE YOU APPROACHING CYBERSECURITY WITH REGARD TO AI?
