UK government launches voluntary AI security framework as cyberattacks surge

The British government's new Code of Practice for Cyber Security of AI provides voluntary guidelines on secure design, development, and transparency to help organizations safely leverage AI technology.

Earlier this year, the British government unveiled its long-awaited Code of Practice for the Cyber Security of AI. As AI becomes an increasingly integral part of day-to-day life, the Code seeks to provide stakeholders with guidance to minimize the risks of the technology. This post will examine:

  • The background behind the Code.
  • The key themes of the Code.
  • The next steps.
  • The implications for creators.

Background

The Code of Practice is one of several AI-related initiatives that the British government has unveiled in recent months. The current Labour administration sees this technology as a vital tool for driving growth and expanding the UK’s economy. In the words of the Prime Minister, Sir Keir Starmer, AI “means more jobs and investment in the UK, more money in people’s pockets, and transformed public services.” 

But if AI is going to be rolled out across a host of different applications, it needs to be as secure as possible lest it become a digital Achilles heel. Consequently, the British government hopes the Code will “give businesses and public services the confidence they need to harness AI’s transformative potential safely.”

At the same time, cyberattacks are a growing problem. Over half of British companies report experiencing at least one cyberattack over the past five years, and dealing with the aftermath has cost them £44 billion (roughly $55 billion). It’s little wonder that the government is keen to shore up cybersecurity.

The key themes of the Code

The Code sets out 13 different principles, but they fall into three broad themes:

  • Secure design: Organizations must conduct thorough risk assessments specific to their AI systems, considering both traditional cyber threats and AI-specific vulnerabilities.
  • Secure development: Security considerations should be embedded throughout the AI development lifecycle rather than bolted on as an afterthought. Developers should also secure the entire AI supply chain, from data acquisition to model deployment (see the sketch after this list). Testing needs to be extensive during development and continue after deployment to address emerging threats.
  • Transparency and accountability: Documentation is key. Technical matters such as data, models, and prompts must be recorded, and end-users should be given the information necessary to properly use, manage, and configure the AI.
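
The Code doesn’t prescribe how any of this should be implemented, but the supply-chain principle maps onto familiar engineering practice. As a minimal illustration (a sketch, not something drawn from the Code itself), the Python snippet below pins SHA-256 digests for model artifacts and refuses to load any file that doesn’t match; the file name and digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Known-good digests for model artifacts, obtained through a trusted channel
# (e.g. a signed release manifest). Placeholder value for illustration only.
EXPECTED_SHA256 = {
    "model.safetensors": "<pinned-digest-hex>",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Raise if an artifact has no pinned digest or fails verification."""
    expected = EXPECTED_SHA256.get(path.name)
    if expected is None:
        raise ValueError(f"No pinned digest for {path.name}; refusing to load.")
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"Digest mismatch for {path.name}: got {actual}")

# Usage: call before handing the file to any model loader.
# verify_artifact(Path("model.safetensors"))
```

The same fail-closed idea extends across the pipeline: datasets, dependencies, and prompts can all be versioned and checked against the records the Code’s documentation principle calls for.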

What’s next?

It’s worth noting that the Code is strictly voluntary. It’s not a law, and no one is obliged to follow its provisions. This approach offers flexibility, since the Code can be updated far faster than legislation, but it won’t be worth much if no one bothers to follow it.

But while the Code may not be legally binding, the British government hopes that it will carry enough clout to be taken seriously. It also intends to submit the Code to the European Telecommunications Standards Institute (ETSI) with a view to influencing global standards on AI. If ETSI were to adopt the Code’s principles, organizations would have a strong incentive to follow them.

What does this mean for content creators?

Even if you aren’t an AI developer, the Code still matters. At the moment, the AI sector resembles the Wild West: there’s little in the way of standardization, and unless you’re well-versed in the intricacies of AI, it can be hard to assess the legitimacy of a given tool. But if the British Code ends up becoming the ‘gold standard,’ it could help creators identify high-quality AI tools. And of course, a greater emphasis on security also benefits end-users by lessening their exposure to flaws and vulnerabilities they might otherwise be unaware of.

Guidelines, not mandates

The Code of Practice for the Cyber Security of AI unveiled by the British government represents a significant development in the regulation of AI. Its emphasis on secure design, secure development, and transparency encourages developers to adopt robust testing protocols and thorough documentation in the service of better cybersecurity. While the Code is voluntary, developers could be strongly incentivized to follow its principles if it ends up shaping global cybersecurity standards.
