AI in the energy sector guidance consultation

Closes 7 Feb 2025

Sector-specific examples

6.1 The following case studies are targeted at supporting adoption of AI within the energy sector, in line with this guidance. Appropriate actions are highlighted but these should not be considered exhaustive. 

AI in consumer interactions

6.2 AI can be used in a range of ways to support service agent interactions with a customer. For example, it could provide a case-history summary, including key points of previous interactions, and so reduce the time an agent needs to understand the context of a case. With increased consumer insight, AI could also help draw the agent's attention to the key issues, customer expectations, key policy documentation, and processes. 

6.3 However, if ill-conceived or poorly implemented, such a system could present inaccurate or irrelevant information to the customer, with the potential for them to be treated unfairly. Transparent communication of appropriate information to consumers is relevant, as is the ability of the system, whether human, technology or both, to interpret the relevant information for the consumer.  

6.4 Consideration should be given to: 

a. the role of governance in developing, implementing and overseeing the effectiveness of AI 

b. undertaking a risk assessment which identifies control measures needed through the life cycle to ensure the AI system functions as defined and results in the consumer being treated fairly  

c. ensuring the system containing AI is designed to take account of the complexities of the AI and its interactions with humans

d. implementation of the identified control measures 

e. implementation of mitigation measures including access to any necessary redress should consumers be treated unfairly 

f. testing and monitoring the effectiveness of the AI model, the implemented controls, and mitigation measures

g. training agents to help identify any incidents where consumers may be treated unfairly  

h. any other measures necessary to reduce the risk and ensure consumers are treated fairly

AI used to identify and assist excluded consumers

6.5 AI could help to identify excluded consumers and highlight relevant information for service decision making, such as during engineering works or for priority service registers. In addition, AI could be used to identify and adapt processes to maximise the potential for consumers to be treated fairly and, where necessary, to highlight procedures for redress. For example, this could apply to consumers in vulnerable circumstances or consumers who are digitally excluded. 

6.6 In this scenario, identifying where consumers may be treated unfairly may be difficult. For example, digitally excluded consumers may be poorly represented in the digital data used to train AI models. Care should be taken to ensure that any intervention does not reinforce existing bias and that consumers' data is protected in line with existing legal requirements.  
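The under-representation risk described above can be checked quantitatively. As a minimal illustrative sketch (the group counts, dataset sizes, and function name below are hypothetical, not drawn from this guidance), a simple representation ratio compares a group's share of the training data against its share of the served population:

```python
# Illustrative sketch: compare the share of a group (e.g. digitally
# excluded consumers) in an AI training set with its share of the
# served population. A ratio below 1 indicates under-representation.
# All figures here are hypothetical.

def representation_ratio(group_in_training, training_size,
                         group_in_population, population_size):
    """Return (training share) / (population share) for a group."""
    train_share = group_in_training / training_size
    pop_share = group_in_population / population_size
    return train_share / pop_share

# Hypothetical counts: 50 of 10,000 training records vs
# 1,500 of 100,000 consumers in the population.
ratio = representation_ratio(50, 10_000, 1_500, 100_000)
```

A ratio well below 1 would prompt the kind of intervention and robust evaluation described in 6.7, before the model's outputs are relied on.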

6.7 Consideration should be given to: 

a. the role of governance in developing, implementing, and overseeing the application of AI to ensure it is effective and not reinforcing bias

b. undertaking a risk assessment which identifies areas where the existing system may fail to support excluded consumers, and identifies measures that result in the consumer being treated fairly 

c. implementation of the identified measures

d. use of scenario planning and trials with robust evaluation measures to ensure arrangements are effective and not reinforcing bias 

e. alternative and diverse means of reaching excluded consumers including collaboration with other organisations to maximise transparency 

f. implementation of mitigation measures including access to any necessary redress should consumers be treated unfairly

g. ensuring robust arrangements are in place to establish and maintain data privacy and compliance with the relevant regulation 

h. testing and monitoring the effectiveness of the AI model, the implemented controls, and mitigation measures

i. any other measures necessary to reduce the risk and ensure consumers are treated fairly

AI in predictions and forecasting

6.8 AI can be used to create predictions and has been used notably in forecasting, where AI models can complement existing models and improve predictions. Within the energy sector, examples include: 

a. weather forecasting, such as forecasting renewable generation using granular weather data and satellite imagery, or predicting storm damage to networks for use in planning responses to extreme weather events

b. predicting time to failure of equipment and expected maintenance requirements

c. predicting electricity usage at different granularities and on different time scales, supporting operations and planning

6.9 These predictions can provide useful insight into what will happen in the future. As with existing methods of prediction, AI has limitations around accuracy and uncertainty. However, AI can be used in conjunction with existing methods to manage these limitations. 

6.10 Specific consideration should be given to:

a. the role of governance in developing, implementing and overseeing the effectiveness of AI 

b. undertaking a risk assessment which identifies control measures needed through the life cycle to ensure the AI system functions as defined and results in usable predictions  

c. identifying the likelihood and consequences of any potential failure modes and maloperation, and identifying any controls or mitigations necessary  

d. implementation of the identified control measures  

e. understanding of required accuracy, biases, and the available data to support the creation of the necessary models 

f. appropriateness of the AI models in comparison with traditional models, and using a combination of multiple models and methods for prediction: for example, situational use of AI models for specific conditions, or in combination with traditional models 
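One way to use an AI model "in combination with traditional models", as 6.10f suggests, is to weight each point forecast by its historical accuracy. The sketch below is purely illustrative: the error figures, forecast values, and function names are assumptions, not part of this guidance.

```python
# Illustrative sketch: blend a hypothetical AI forecast with a
# traditional (e.g. statistical) forecast, weighting each model by
# the inverse of its historical mean absolute error (MAE).

def inverse_error_weights(ai_errors, trad_errors):
    """Derive blending weights from each model's historical residuals."""
    ai_mae = sum(abs(e) for e in ai_errors) / len(ai_errors)
    trad_mae = sum(abs(e) for e in trad_errors) / len(trad_errors)
    inv_ai, inv_trad = 1.0 / ai_mae, 1.0 / trad_mae
    total = inv_ai + inv_trad
    return inv_ai / total, inv_trad / total

def blended_forecast(ai_pred, trad_pred, w_ai, w_trad):
    """Combine two point forecasts using the derived weights."""
    return w_ai * ai_pred + w_trad * trad_pred

# Hypothetical historical residuals (MW) for each model.
w_ai, w_trad = inverse_error_weights([2.0, -1.0, 1.5], [4.0, -3.0, 5.0])
demand = blended_forecast(ai_pred=102.0, trad_pred=98.0,
                          w_ai=w_ai, w_trad=w_trad)
```

Because the more accurate model receives the larger weight, a degraded AI model is automatically down-weighted, which supports the monitoring and situational-use considerations above.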

AI in cyber-physical systems 

6.11 AI can be integrated into the control of physical systems, providing opportunities to mitigate risk or reduce operational costs. Examples include: 

a. using robots and autonomous systems in hazardous environments, such as drones for inspections, reducing health and safety risks to individuals

b. AI automation controlling devices, such as battery storage, to align generation with demand forecasts

c. AI in a cyber security context being used by network operators to monitor for potential cyber-attacks, launch risk and vulnerability assessments, and deploy automated defences. This includes situational awareness and triggering actions such as energy dispatch to mitigate risks and maintain system stability

6.12 If implemented poorly, these applications can result in harm to the systems they support: the automation may lead to maloperation that damages both the device itself and the systems it is connected to or operating with. This could include inappropriate shutdowns, potentially with a cascading effect, or autonomous drones colliding with and damaging infrastructure. 

6.13 Consideration should be given to:

a. the role of governance in developing, implementing and overseeing the effectiveness of AI

b. undertaking a risk assessment which identifies control measures needed through the life cycle to ensure the AI system functions as defined and that in its operation, outcomes are as required

c. implementation of the identified control measures, for example using frameworks such as functional safety. This may include wraparound systems and guardrails, intended to ensure the system remains in a safe state

d. appropriate consideration of the potential impact of failure on a broader system to ensure any consequential impact of the failure is mitigated

e. ensuring the system containing AI is designed to take account of the complexities of the AI and its interactions with humans

f. testing and monitoring the effectiveness of the AI model, the implemented controls, and mitigation measures

g. appropriate training of operators and overseers to identify incidents where the system is entering maloperation, and how to intervene

h. identification and implementation of any other measures necessary to reduce the risk
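The "wraparound systems and guardrails" in 6.13c can be pictured as a thin layer of deterministic checks between the AI controller and the device. The sketch below is illustrative only: the battery example, rate limits, and thresholds are assumptions, not values from this guidance or any functional safety standard.

```python
# Illustrative guardrail sketch: an AI controller proposes a battery
# setpoint (kW, positive = discharge), and a deterministic wrapper
# clamps it to a safe envelope before it reaches the device.
# All limits and thresholds here are hypothetical.

SAFE_MIN_KW = -50.0   # hypothetical maximum safe charge rate
SAFE_MAX_KW = 50.0    # hypothetical maximum safe discharge rate

def guardrail(proposed_setpoint_kw, soc_fraction):
    """Clamp an AI-proposed setpoint so the system stays in a safe state.

    soc_fraction: state of charge in [0, 1]. Blocks discharging a
    near-empty battery or charging a near-full one, then applies
    hard rate limits regardless of what the AI proposed.
    """
    if soc_fraction <= 0.05 and proposed_setpoint_kw > 0:
        return 0.0  # near-empty: refuse discharge
    if soc_fraction >= 0.95 and proposed_setpoint_kw < 0:
        return 0.0  # near-full: refuse charge
    return max(SAFE_MIN_KW, min(SAFE_MAX_KW, proposed_setpoint_kw))
```

Because the wrapper is simple and independent of the AI model, it can be verified and tested on its own, which is the point of keeping the safety function outside the black box.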

AI in pricing and trading

6.14 AI enables energy traders to make informed and profitable trading decisions by detecting market opportunities and risks. AI can analyse complex market dynamics in energy trading by processing real-time data on pricing, demand and supply trends. AI can also assist in conducting risk management, by proactively assessing market volatility and uncertainty. Energy portfolios can be optimised by simulating market scenarios, analysing sentiment, automating tasks and continually adapting to changing market conditions. 

6.15 The use of AI in pricing and trading has the potential to adversely impact competition. Where AI is used to fix bids, prices or margins, or to exchange commercially sensitive information between competitors, particularly pricing information, it could breach competition law. The use of AI, in certain circumstances, might also amount to the abuse of a dominant market position in breach of competition law. Further information on competition law compliance obligations is set out in Appendix 1. 

6.16 Therefore, it is important to consider:

a. the role of governance in developing, implementing, and overseeing the application of AI to ensure the system containing it is not operating in an anti-competitive manner

b. undertaking a risk assessment which identifies areas where the system may fail, and identifies measures that result in fair market outcomes

c. implementation of the identified measures

d. having appropriate oversight, monitoring and audit trail in place confirming that the AI system has been checked for compliance with competition law

Use of black boxes

6.17 The term 'black box' is often used to describe systems where it is not possible to understand and quantify how the system generates its output.

6.18 AI, as with other forms of technology during their development, can be difficult to explain or understand. Such technologies can provide powerful capabilities, as seen with large language models (LLMs), and can be successfully adopted with the appropriate approach. However, they often carry higher levels of uncertainty, which need to be accounted for when using such technology.

6.19 When using black box technology, the increased uncertainty requires users to mitigate the additional risk this creates. In addition, maloperation, such as hallucinations in LLMs, can occur and be indistinguishable from normal operation without additional verification and validation. 

6.20 Considerations should include:

a. the role of governance in developing, implementing and overseeing the effectiveness of AI

b. undertaking a risk assessment which identifies control measures needed through the life cycle to ensure the AI system functions as defined and that in its operation, outcomes are as required

c. implementation of the identified control measures

d. testability: testing can be used to increase the understanding of the characteristics of the AI system, including empirical assessments of bias and accuracy. Sufficient assurance of the system performance should be combined with appropriate control measures 

e. monitoring and corroboration: it may be necessary to validate output against known good data, research or expertise

f. explainability: organisations have a duty to provide an appropriate level of explanation of the AI they use

g. risk: the residual uncertainty and its potential impact after mitigations must be considered, and the organisation must decide as to whether it can tolerate the risk, and act accordingly
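The corroboration step in 6.20e can be made concrete: before a black-box output is relied on, it is compared against known good reference data and flagged for human review when it disagrees. The sketch below is illustrative only; the tolerance, function name, and fallback behaviour are assumptions, not requirements of this guidance.

```python
# Illustrative corroboration sketch: accept a black-box model's output
# only when it is close to a known good reference value; otherwise
# fall back to the reference and flag the case for human review.
# The 10% tolerance is a hypothetical choice.

def corroborate(model_output, reference, tolerance=0.1):
    """Validate a model output against known good data.

    Returns (accepted, value): the model output if it is within the
    relative tolerance of the reference, otherwise the reference
    value with accepted=False, signalling the need for review.
    """
    if reference == 0:
        return (model_output == 0), reference
    relative_error = abs(model_output - reference) / abs(reference)
    if relative_error <= tolerance:
        return True, model_output
    return False, reference  # fall back and flag for review
```

Logging every rejected output also supports the testing, monitoring, and governance considerations above, since repeated rejections are evidence that the residual risk in 6.20g is higher than assumed.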