AI in the energy sector guidance consultation

Closes 7 Feb 2025

Appendix 5: AI and cyber security

The use of AI and its impact on cyber security 

5.1 The application of AI has the potential to improve resilience to cyber attack, for example through intruder detection by monitoring for subtle changes in system performance. However, the uncontrolled, unauthorised and ungoverned use of any technology, particularly emerging technologies, can expose an organisation to an unknown number of existing and emerging cyber security threats and vulnerabilities. The rapid proliferation of AI technologies, the ready availability of AI services (particularly Generative AI), the widespread interest in their use, and the broad array of potential uses may lead to AI being used without knowledge or authorisation. This rapid proliferation of emerging technology is not a new risk; it has been observed before, for example when internet access and cloud computing first became commonly available at the office desktop.
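
As an illustrative, non-prescriptive sketch of the monitoring approach described above, the Python example below flags readings of a system performance metric that deviate markedly from a rolling baseline. The metric, window size, threshold and sample values are hypothetical and are not part of this guidance.

    # Illustrative sketch only: flag subtle deviations in a system performance
    # metric from a rolling baseline. Metric names, window size and threshold
    # are hypothetical and not prescribed by this guidance.
    from collections import deque
    from statistics import mean, stdev

    def make_detector(window=30, threshold=3.0):
        """Return a scoring function that compares each reading to a rolling baseline."""
        history = deque(maxlen=window)

        def is_anomalous(reading):
            anomalous = False
            if len(history) == window:
                baseline, spread = mean(history), stdev(history)
                if spread > 0 and abs(reading - baseline) / spread > threshold:
                    anomalous = True
            history.append(reading)
            return anomalous

        return is_anomalous

    # Hypothetical CPU-load readings; the final value simulates a sudden change.
    detect = make_detector(window=4)
    for load in [0.41, 0.39, 0.42, 0.40, 0.97]:
        if detect(load):
            print(f"Possible intrusion indicator: unusual system load {load}")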

5.2 To maintain effective cyber security hygiene, stakeholders are expected to ensure AI is used in a controlled manner. A key element of robust cyber security hygiene is that the stakeholder is always aware of what technology is being used, for what purpose and where, and that it is being used in a controlled manner by appropriately authorised staff and systems.
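
A minimal sketch of one way to maintain that visibility is shown below, assuming a simple Python-based register of authorised AI use; the field names and the example entry are hypothetical and are not prescribed by this guidance.

    # Illustrative sketch only: a minimal register recording what AI technology is
    # in use, for what purpose, where, and who authorised it. Field names and the
    # example entry are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AIUseRecord:
        system: str          # the AI technology or service in use
        purpose: str         # what it is used for
        location: str        # where it runs (site, network zone, cloud tenancy)
        authorised_by: str   # accountable owner who approved the use
        approved: bool = False

    register = []

    def record_ai_use(entry):
        """Record an entry; unapproved use is flagged for review rather than silently accepted."""
        register.append(entry)
        if not entry.approved:
            print(f"Review required: unauthorised AI use recorded for {entry.system}")

    record_ai_use(AIUseRecord(
        system="demand-forecasting model",
        purpose="short-term load forecasting",
        location="internal analytics platform",
        authorised_by="data governance board",
        approved=True,
    ))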

5.3 AI technology is a continuation of the development of broader digital technologies, including hardware, software and data, each of which has associated guidance and good practice expectations. This good practice is expected to be applied with the same consideration and rigour, based on applied risk analysis and assurance methodologies. This includes guidance provided by:

a. NCSC Secure Design Principles

b. NCSC Secure Development and Deployment

c. NCSC Cloud Security Principles

d. ICO GDPR Guidance

e. Ofgem Data Best Practice Guidance

5.4 In addition to established guidance and good practice, it is necessary to consider the novel and exacerbated security risks that are AI-specific, arising from the unique behaviours and characteristics of AI-based systems and processes. This good practice is expected to be applied in a manner appropriate to the role of the stakeholder within the energy sector, the nature and criticality of the systems and processes where the technology is being used, and the risk and threat assessment and other non-functional needs of the use case being developed.

5.5 The NCSC continues to assess the cyber security impact of AI and has captured this in a case study, specific machine learning guidance and AI guidance. In addition, the ICO has published guidance on how AI systems can exacerbate known security risks and make them more difficult to manage.

5.6 Stakeholders are expected to assess the suitability of their established cyber risk management approach to identify, assess, mitigate and manage the threats and risks to their organisation and customers associated with the adoption and use of AI-based technologies.

5.7 To manage cyber security risks associated with the use of AI, stakeholders are also expected to consider: 

a. AI’s impact on existing cyber security, risk management and incident response arrangements 

b. the need for any additional controls to prevent the unauthorised or uncontrolled use of AI, for example, preventing the use of shadow AI through additional education and technical controls (an illustrative sketch of such a technical control follows this list)

c. the effectiveness of governance and accountability arrangements (see Governance and policies)

d. competency within the stakeholder’s organisation (see Competencies) 

e. supply chain management (see Appendix 3) 

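As a purely illustrative sketch of the kind of technical control mentioned in point b above, the Python example below shows an egress check that a web proxy or gateway could apply so that only approved AI services are reachable; the domain names and policy are hypothetical and are not prescribed by this guidance.

    # Illustrative sketch only: block outbound requests to known AI services that
    # have not been approved. Domain names and policy are hypothetical examples.
    KNOWN_AI_SERVICE_DOMAINS = {
        "ai.example-internal.corp",     # hypothetical internally hosted service
        "approved-vendor.example.com",  # hypothetical contracted provider
        "public-genai.example.net",     # hypothetical public generative AI service
    }

    APPROVED_AI_SERVICES = {
        "ai.example-internal.corp",
        "approved-vendor.example.com",
    }

    def allow_request(hostname):
        """Permit traffic unless it targets a known AI service that is not approved."""
        if hostname in KNOWN_AI_SERVICE_DOMAINS:
            return hostname in APPROVED_AI_SERVICES
        return True

    for host in ("approved-vendor.example.com", "public-genai.example.net"):
        verdict = "allowed" if allow_request(host) else "blocked (unapproved AI service)"
        print(host, verdict)
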
5.8 Regarding the AI supply chain, stakeholders are expected to apply appropriate mitigations based on the identified and assessed risk, including potential emerging threats and vulnerabilities, and the assessed impact on services and assets, such as those recommended in the following guidance:

a. NCSC Basic risk assessment and management method

b. NCSC Cloud security principles

c. NCSC Supply chain security principles

d. NCSC Machine learning principles: securing your supply chain

Manage the threats from offensive AI 

5.9 There is evidence that the emergence of AI is increasing the volume, sophistication and effectiveness of existing cyber attack tactics. It can also change the threat landscape, introducing new AI-based attacks, new vulnerabilities and new opportunities for cyber threat actors.

5.10 NCSC has analysed the near-term impact of AI on the cyber threat landscape and provided an assessment. The assessment provides an overview of AI’s potential uses as a tool for cyber attack. 

5.11 Stakeholders are expected to update their threat intelligence gathering to include AI and consider increasing the frequency of review and breadth of intelligence sources. This will enable an organisation’s cyber risk assessment and associated controls to take account of AI-related threats.  

5.12 It will also be necessary for stakeholders to ensure that the skills, knowledge and capability of cyber security management and technical teams are kept up to date with the latest developments in AI and associated threat intelligence.

5.13 The NCSC offers guidance on intelligent security tools for stakeholders considering incorporating defensive AI-based security tools into their security posture. The impact of the loss of internal security knowledge and skills from the introduction of such tools is expected to be considered.