UK cyber chief: “AI should be developed with security at its core”

NCSC CEO Lindy Cameron’s speech emphasised the importance of building security into AI technologies from the outset.

Security must be the primary consideration for developers of artificial intelligence (AI) to avoid designing systems that are vulnerable to attack, the head of the UK’s cyber security agency (NCSC) warned today.

In a major speech, Lindy Cameron highlighted the importance of security being baked into AI systems as they are developed, rather than added as an afterthought. She also set out the actions developers need to take to protect individuals, businesses, and the wider economy from inadequately secure products.

Her comments were delivered to an audience at the influential Chatham House Cyber 2023 conference, which sees leading experts gather to discuss the role of cyber security in the global economy and the collaboration required to deliver an open and secure internet.

She said:

“We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.

“Like our US counterparts and all of the Five Eyes security alliance, we advocate a ‘secure by design’ approach where vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset. This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use.

“We know, from experience, that security can often be a secondary consideration when the pace of development is high.

“AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.”

The UK is a global leader in AI, with an AI sector that contributes £3.7 billion to the economy and employs 50,000 people. Later this year it will host the first ever global AI safety summit, aimed at driving targeted, rapid international action to develop the international guardrails needed for the safe and responsible development of AI.

Reflecting on the National Cyber Security Centre’s role in helping to secure advancements in AI, she highlighted three key themes her organisation is focused on. The first of these is supporting organisations to understand the associated threats and how to mitigate them. She said:

“It’s vital that people and organisations using these technologies understand the cyber security risks – many of which are novel.

“For example, machine learning creates an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.

“And LLMs pose entirely different challenges. For example – an organisation’s intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.”

The second key theme Ms Cameron discussed was the need to maximise the benefits of AI to the cyber defence community. On the third, she emphasised the importance of understanding how our adversaries – whether they are hostile states or cyber criminals – are using AI and how they can be disrupted. She said:

“We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.

“LLMs also present a significant opportunity for states and cyber criminals. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”

Written by C.L Martin
