AI in the energy sector guidance consultation

Closes 7 Feb 2025

Risk

Expectation 

4.1 Stakeholders evaluate the risks associated with the use of AI in the energy sector to help them effectively identify and implement measures necessary to manage those risks. 

Description 

4.2 Our regulatory approach generally focuses on outcomes in managing risk rather than setting prescriptive rules on the application of AI. The aim is that this approach gives stakeholders flexibility in how they deliver the desired outcomes and manage risk, while empowering them to innovate. The following risk framework is intended to assist stakeholders in understanding the energy regulator’s view on risk and proportionality. The alternative to outcome-based regulation is a prescriptive approach, regulating through the application of rules. Such rules would need to account for all eventualities, which would be very difficult to implement effectively for rapidly developing technologies such as AI.

4.3 The use of AI can present novel risks, including bias in training data, model inaccuracy and shifts in model behaviour due to adaptation or changes in the application environment.

4.4 From the outset it is important to consider the use of AI as a component within a larger application system. For example, this may be an AI component within a wider engineered and/or technical system, or persons overseeing the use of a large language model. This is important because the technical and/or human systems that surround the AI component play an important part in managing the risk associated with the use of AI, including any uncertainty associated with the AI component and its operating environment.

4.5 Because AI is a nascent technology, several principles-based approaches to managing the risks associated with its use have emerged. These approaches are likely to mature over time. We have reviewed a number of these frameworks, including:

a. National risk register 2023 on GOV.UK 

b. Considerations for developing artificial intelligence systems in nuclear applications, Office for Nuclear Regulation 

c. Assurance of machine learning for use in autonomous systems (AMLAS) tool v1 user guide, University of York 

d. ISO/IEC 42001 AI management systems

e. AI Risk Management Framework, National Institute of Standards and Technology, US Department of Commerce 

f. Microsoft Responsible AI Standard v2, General Requirements 

g. The AI Safety Institute 

h. Department for Science, Innovation and Technology’s (DSIT) Call for views on Cyber Security of AI 

4.6 Potential users of AI in the energy sector are expected to consider applying these framework practices in a proportionate manner, in line with the safety, security, fairness and sustainability risks associated with the application of AI.

Good practice

Practice 1: have a clear strategy  

4.7 Stakeholders considering using AI should, as a matter of good practice, clearly articulate the benefits of AI, its use and any associated risk compared with alternative or traditional technologies. As part of this, stakeholders are expected to develop a clear plan for the entire AI project life cycle, from planning, development and deployment through to monitoring, auditing and decommissioning. This approach will facilitate adaptability to new regulation.

Practice 2: risk assessment and management 

4.8 Use of AI should be accompanied by proportionate management of risk, established through an effective, evidence-led risk assessment. The purpose of the risk assessment is to guide stakeholders considering using AI towards proportionate actions and to ensure the risks of failure are avoided, mitigated or both. Risk is commonly expressed as the combination of the probability of an adverse event and the consequence of that event. Depending on the level of risk associated with the use of AI, it may be beneficial for potential users of the technology to use risk matrix frameworks, for example Machine Learning Principles on the NCSC website, and to keep a record of the assessment to aid future use. Risk areas might include, but are not limited to, operational, legal and reputational risks.
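For illustration only, the sketch below shows one way a simple qualitative risk matrix and assessment record of the kind described above might be expressed. The likelihood and consequence scales, scores, category thresholds and the example risk are assumptions chosen for the example, not values prescribed by this guidance.

```python
# Illustrative only: a minimal qualitative risk matrix. The scales, scores
# and thresholds are assumptions for the example, not prescribed values.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_rating(likelihood: str, consequence: str) -> tuple[int, str]:
    """Combine likelihood and consequence into a score and a category."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        category = "high"
    elif score >= 6:
        category = "medium"
    else:
        category = "low"
    return score, category

# Example record that could be kept to aid future use of the assessment.
assessment = {
    "risk": "training data bias leads to unfair customer outcomes",
    "area": "operational / reputational",
    "rating": risk_rating("possible", "major"),  # -> (12, "medium")
    "mitigation": "independent review of training data and outputs",
}
print(assessment)
```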

Practice 3: adopt good practice in specification and development  

4.9 In a similar manner to conventional software, robust AI components, and the wider systems intended to protect against any uncertainty associated with the use of AI, should be developed using established good practice over the life cycle of the systems (that is, from concept to the end of operational life). This includes clear requirements (for example, input specification, output requirements, assumptions) and good practice in software development, data governance and management, and AI component training (including AI ethics for responsible AI use).
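As a purely illustrative sketch of the kind of clear input specification mentioned above, the example below checks inputs against documented limits before an AI component is invoked. The feature names and limits are hypothetical assumptions, not requirements of this guidance.

```python
# Illustrative only: enforcing a documented input specification before an
# AI component is invoked. The feature names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class InputSpec:
    name: str
    minimum: float
    maximum: float

# Hypothetical specification agreed at the requirements stage.
SPEC = [
    InputSpec("grid_frequency_hz", 47.0, 52.0),
    InputSpec("demand_forecast_mw", 0.0, 60000.0),
]

def validate(inputs: dict[str, float]) -> list[str]:
    """Return a list of specification violations (empty if none)."""
    violations = []
    for spec in SPEC:
        value = inputs.get(spec.name)
        if value is None:
            violations.append(f"{spec.name}: missing")
        elif not (spec.minimum <= value <= spec.maximum):
            violations.append(f"{spec.name}: {value} outside [{spec.minimum}, {spec.maximum}]")
    return violations

issues = validate({"grid_frequency_hz": 49.9, "demand_forecast_mw": 71000.0})
if issues:
    print("Input outside specification; refer to the surrounding system:", issues)
```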

Practice 4: understand the characteristics of the AI component within the broader system 

4.10 The system containing AI should be tested in a proportionate manner to develop confidence in the performance characteristics of the AI and the surrounding system (human or physical). Given the difficulty of determining the reliability of any AI component, it is likely that the broader system, not just the AI component, will make a large contribution towards the overall reliability. The AI component and its operational environment will likely change with time due to the factors below (an illustrative monitoring sketch follows the list):

a. ageing

b. environmental changes

c. evolution of organisational culture, for example perceived trust in the AI component

d. system changes, for example changes in signal sampling (known as quantisation) and timing changes   
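Changes of the kind listed above can, in principle, be watched for by comparing the data an AI component sees in operation against the data it was trained on. The sketch below is a minimal illustration of such a check; the baseline statistics, threshold and sensor readings are assumptions for the example.

```python
# Illustrative only: a simple check for drift between the data an AI
# component was trained on and the data it now sees in operation.
# The baseline values, threshold and readings are assumptions.
from statistics import mean

TRAINING_BASELINE = {"mean": 35.0, "stdev": 4.0}  # hypothetical training statistics
DRIFT_THRESHOLD = 2.0  # flag if the recent mean moves more than 2 baseline standard deviations

def drift_detected(recent_values: list[float]) -> bool:
    """Flag when recent operational data has shifted away from the baseline."""
    shift = abs(mean(recent_values) - TRAINING_BASELINE["mean"])
    return shift > DRIFT_THRESHOLD * TRAINING_BASELINE["stdev"]

recent = [44.1, 46.3, 45.2, 43.8, 47.0]  # hypothetical recent sensor readings
if drift_detected(recent):
    print("Operational data has drifted from the training baseline; review the AI component.")
```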

Practice 5: identify and address potential failure modes that could impact safety, security or fairness 

4.11 Users of AI are expected to assess potential failure modes and maloperation in a proportionate way, to make sure the broader systems can control and mitigate the consequences of potential failures and maloperation. The assessment of failure modes and maloperation may well drive the consideration of engineering protection (functional safety) or human intervention. The assessment is also expected to take account of any unintended consequences, such as loss of skill base following the introduction of AI or over-reporting of positive outcomes. Monitoring can also be used to help identify any drift in the behaviour of the AI system. This could be via the use of independent systems, such as diversity in AI components, digital twin comparators, or conventional systems (for example, functional safety). Arrangements for mitigating the consequences of AI component failure, and for recovery, should be implemented to support, if necessary, the overall recovery of operations.
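To illustrate the idea of an independent comparator described above, the sketch below checks an AI component's output against a simple conventional estimate and falls back if the two diverge. The forecasting functions, tolerance and fallback behaviour are hypothetical assumptions, not a prescribed design.

```python
# Illustrative only: using an independent, conventional calculation as a
# comparator for an AI component's output, with a fallback if they diverge.
# The function names, tolerance and fallback behaviour are assumptions.

TOLERANCE_MW = 500.0  # hypothetical acceptable disagreement

def ai_forecast(inputs: dict) -> float:
    """Stand-in for the AI component's demand forecast (hypothetical)."""
    return inputs["recent_demand_mw"] * 1.05

def conventional_forecast(inputs: dict) -> float:
    """Stand-in for a simple, well-understood conventional estimate."""
    return inputs["recent_demand_mw"] * 1.02

def forecast_with_comparator(inputs: dict) -> float:
    ai_value = ai_forecast(inputs)
    reference = conventional_forecast(inputs)
    if abs(ai_value - reference) > TOLERANCE_MW:
        # Divergence beyond tolerance: record the event and fall back.
        print("AI and comparator diverge; using conventional estimate and logging for review.")
        return reference
    return ai_value

print(forecast_with_comparator({"recent_demand_mw": 32000.0}))
```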

Practice 6: develop confidence in the performance of the AI component within the broader system 

4.12 Rigorous testing is likely to be essential in building confidence in the use of AI prior to its application. However, given the complexity of AI components and the uncertainty associated with AI systems, determining the overall level of uncertainty of an AI component may be difficult, and potentially impossible; arrangements are therefore needed to make sure the output of AI components is used appropriately. It may be beneficial to develop metrics to evaluate the performance of the AI component, and the suitability of the controls and mitigations of the broader system or systems, through monitoring and evaluation.
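As a minimal illustration of the kind of performance metric mentioned above, the sketch below computes a mean absolute error against observed values and compares it with an acceptance threshold. The metric choice, threshold and data are assumptions for the example only.

```python
# Illustrative only: a simple performance metric with an acceptance
# threshold, of the kind that might support monitoring and evaluation.
# The metric choice, threshold and data are assumptions.

ACCEPTABLE_MAE = 3.0  # hypothetical maximum acceptable mean absolute error

def mean_absolute_error(predicted: list[float], observed: list[float]) -> float:
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

predicted = [50.2, 48.9, 51.5, 49.7]  # hypothetical AI component outputs
observed = [49.8, 50.1, 50.9, 52.3]   # hypothetical measured values

mae = mean_absolute_error(predicted, observed)
print(f"MAE = {mae:.2f}; within tolerance: {mae <= ACCEPTABLE_MAE}")
```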

Practice 7: access to competent persons 

4.13 Stakeholders should ensure that the skills and experience needed to deploy AI effectively and safely are available and developed. This includes:

a. operational application knowledge including understanding the consequences of failure and maloperation

b. behaviour and culture to ensure the deployment of the system or systems containing AI is done in such a way as to reduce any associated risk  

4.14 It is also important that stakeholders provide staff with training appropriate to their roles, to ensure they understand AI technologies, their use and their impact on staff activities.

Practice 8: human and AI interaction 

4.15 It is important to take account of the complexity of the interaction between humans and the systems containing AI. As such, human oversight from the earliest stage is recommended as a risk control and mitigation measure. It is important to consider both overtrust and undertrust in the AI components, especially where human oversight is used as a risk control and mitigation measure.

Practice 9: monitor and review 

4.16 Arrangements need to be reviewed regularly and in a proportionate manner to ensure they remain effective. Again, metrics may be helpful for monitoring system performance.