AI in the energy sector guidance consultation
Governance and policies
Expectation
3.1 Stakeholders have appropriate policies, processes and procedures in place to ensure effective governance and oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.
Description
3.2 Strategies and organisational arrangements for the safe, secure, fair and sustainable use of AI are expected to be driven at board and senior management level, and to take account of ethical standards.
Good practice
Practice 1: clear strategy, with articulation of outcomes and associated risks
3.3 At the outset, licensees and regulated persons are expected to define their strategy on the use of AI, taking account of their operating environment, associated risks, and any positive implications from the application of AI. This strategy, if carried out effectively, can be used to ensure governance arrangements are proportionate to the risks associated with the use of AI. It is expected that any strategy be agreed by the board (or equivalent governing body), communicated and reviewed periodically in accordance with company policy, and amended in response to any significant or relevant learning or change in the operating environment.
3.4 Any licensee or regulated person considering using AI should clearly articulate the benefits of the outcomes of the AI use and the arrangements in place for proportionate management of the risks associated with its use. This articulation is essential to ensure that customers and the wider public are aware of the organisation's use of AI and are assured that its AI use and services are safe, secure, fair and sustainable.
Practice 2: effective accountability and governance
3.5 Effective accountability should ensure that the use of AI leads to positive outcomes. There is a strong link between accountability and trust in AI. Adequate governance and accountability arrangements should lead to acceptable standards, quality, and performance and, consequently, establish trust in a specific application of AI or in the technology itself.
3.6 Effective governance applies across the AI life cycle from concept to decommissioning. It should include specification, model development, model acquisition and the associated supply chain, model training, data management including data protection, deployment, monitoring, suitable protection and mitigation against faults, consideration of the need for redress, and system modification. Further detail on our expectations around supply chain management is set out in Appendix 3, on data use and management in Appendix 4, and on AI and cyber security in Appendix 5.
Practice 3: clear guidelines and policies
3.7 Businesses employing AI in their operations are expected to establish clear guidelines and policies for its use. They are accountable for the consequences of AI use within their organisation and to their users, requiring robust risk management strategies and response plans for potential AI-related incidents.
Practice 4: clear role expectations across the organisation
3.8 Outlined below is an overview of aspects of governance that are expected to be considered for inclusion in existing arrangements to take account of the specific characteristics of AI.
3.9 Board
a. Strategies and organisational arrangements for the use of AI are expected to be driven at board and senior management levels.
b. Board oversight of AI can reside with the full board, an existing committee (for example, audit, technology, or cyber security), or a newly formed committee dedicated to AI.
c. Board members are expected to be accountable, to hold ultimate oversight, and to own the delegation of responsibilities. This may include setting top-down strategy, principles, and policies that take account of strategic context and reputational issues. To undertake this role effectively, a board is expected to have the necessary competence, access to reporting metrics, opportunities for meaningful discussion on AI, and awareness of the benefits, assumptions and potential shortcomings of models and analyses. Competencies may include, for example, expertise in responsible AI (technical and non-technical), ethics and consumer interest, to enable effective oversight of AI opportunities and risks. These competencies can be developed by appointing or co-opting specialist board members, or through training and coaching sessions.
d. Where appropriate, the board is expected to drive the development of or ensure the adequacy of existing arrangements for:
1. rules, controls, and policies for the stakeholder’s use of AI, including its outcomes
2. defining senior management roles and responsibilities
3. arrangements for internal audit
4. compliance structures and measures
e. The board, or equivalent, is expected to ensure suitable and sufficient reviews of AI policies and practices are undertaken, including against legal, societal, and technological developments, to ensure risk policies continue to align with good practice. Boards are expected to also consider the need for independent verification or auditing of their arrangements.
f. Existing organisational policies should be reviewed to assess whether there is a need for any change due to the use of AI systems.
3.10 Management
a. Senior management is expected to supply the board with regular reports on any potential impact the use of AI may have on safety, security, fairness, or sustainability. These reports are expected to be prepared and approved by suitably qualified and experienced personnel from within the organisation or third parties. For example, these reports may include:
1. AI risk status and trajectory, including risk within the supply chain
2. AI performance (supported by key performance indicators)
3. operational experience associated with the use of AI
4. outcome of any review (for example, risk audit, impact assessment)
5. outcome of any scenario planning to anticipate and mitigate potential risks (for example, data misuse or unintended AI consequences, including impacts on end users of the AI systems, such as consumers)
b. Senior management is expected to ensure roles and responsibilities are cascaded down from the board (or equivalent) with clear and well-understood channels for communicating and escalating risks. These are expected to be defined such that employees and third-party personnel understand their roles and responsibilities. These roles and responsibilities are expected to be kept up to date and form part of job descriptions.
c. A responsible, accountable, consulted, and informed (RACI) matrix can be used to highlight role interdependencies, provide clarity and alignment, and set expectations of people, their roles, and their responsibilities within the organisation with regard to AI development and AI use (a minimal illustration is given at the end of this subsection).
d. Arrangements are expected to be supported by a proportionate risk management system. This does not need to be AI-specific but needs to account for the characteristics of AI. These arrangements are expected to:
1. define the scope of, and any risk appetite for, systems containing AI and any associated data
2. define the company's approach to managing the AI system and associated data, including appropriate policies, standards (see Appendix 2 for AI standards), processes, procedures and/or practices which translate and embed the board's direction into business-as-usual activities
3. define the governance roles and responsibilities including risk ownership and risk mitigation
4. be regularly reviewed in accordance with company policy and kept up to date, to ensure it remains in line with evolving AI risks and the company's risk appetite
5. involve the implementation of robust change control and data governance measures
e. Those with management responsibility are expected to be given the necessary authority to make decisions and drive risk management activities in line with guidance from the board or equivalent. Suitable structures are expected to be in place to support the delegation and escalation of AI risk management decision making to the appropriate level.
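As a minimal illustration of the RACI matrix referred to in paragraph 3.10(c), the hedged sketch below records hypothetical AI-related activities and role assignments in Python and checks one common RACI convention, that each activity has exactly one accountable owner. The activities, roles and assignments shown are assumptions for illustration only, not prescribed allocations.

    # Illustrative sketch only: a minimal RACI matrix for AI-related activities.
    # The activities, roles and assignments are hypothetical examples.

    RACI = {
        # activity: {role: "R" (responsible), "A" (accountable),
        #            "C" (consulted) or "I" (informed)}
        "Approve AI strategy":   {"Board": "A", "CTO": "R", "Risk": "C", "Legal": "C"},
        "Model risk assessment": {"Board": "I", "CTO": "A", "Risk": "R", "Legal": "C"},
        "Incident response":     {"Board": "I", "CTO": "A", "Operations": "R", "Risk": "C"},
    }

    def accountable_for(activity: str) -> list[str]:
        """Return the role(s) marked accountable for an activity."""
        return [role for role, code in RACI[activity].items() if code == "A"]

    def check_single_accountability() -> None:
        """Flag activities that do not have exactly one accountable role."""
        for activity in RACI:
            owners = accountable_for(activity)
            if len(owners) != 1:
                print(f"Review needed: '{activity}' has {len(owners)} accountable roles")

    if __name__ == "__main__":
        check_single_accountability()                # no output if the matrix is well formed
        print(accountable_for("Incident response"))  # ['CTO']

An automated check of this kind is optional; the same convention can equally be enforced through manual review of the matrix.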
3.11 Project
a. There should be clear and agreed justification for the use of AI with goals and outcomes defined. Within a project, arrangements should be in place to ensure adherence to relevant good practice through planning, documenting, monitoring, and escalating to ensure ethical use of AI and governance goals are embedded, including:
1. change control
2. data governance and risk management
3. testing models and procedures (see the illustrative sketch after this list)
4. regular review and internal audit
5. recordkeeping, including but not limited to board and other meeting minutes and associated supporting materials
6. ways to educate users about the use of AI and about identifying errors and mitigations
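As a hedged sketch of the model testing referred to in item 3 above, the example below compares a model's accuracy on a fixed evaluation set against a recorded baseline and escalates when performance degrades materially. The baseline figure, tolerance, placeholder predict function and evaluation data are all assumptions for illustration, not recommended values.

    # Illustrative sketch only. The baseline, tolerance, `predict` function
    # and evaluation data are hypothetical assumptions for this example.

    BASELINE_ACCURACY = 0.92   # recorded at the last approved release (assumed)
    TOLERANCE = 0.02           # acceptable degradation before escalation (assumed)

    def predict(features: list[float]) -> int:
        """Stand-in for the deployed model's prediction call."""
        return int(sum(features) > 1.0)  # trivial placeholder logic

    def accuracy(eval_set: list[tuple[list[float], int]]) -> float:
        """Fraction of evaluation examples the model classifies correctly."""
        correct = sum(1 for features, label in eval_set if predict(features) == label)
        return correct / len(eval_set)

    def regression_test(eval_set: list[tuple[list[float], int]]) -> None:
        """Escalate if accuracy drops materially below the recorded baseline."""
        score = accuracy(eval_set)
        if score < BASELINE_ACCURACY - TOLERANCE:
            # In practice this would feed the incident reporting and escalation
            # channels described under paragraph 3.10, not merely raise an error.
            raise RuntimeError(f"Model accuracy {score:.3f} is below baseline")
        print(f"Pass: accuracy {score:.3f} is within tolerance of the baseline")

    if __name__ == "__main__":
        fixed_eval_set = [([0.9, 0.4], 1), ([0.1, 0.2], 0), ([0.8, 0.5], 1), ([0.3, 0.1], 0)]
        regression_test(fixed_eval_set)

Comparable checks can be attached to change control, so that a model change cannot be promoted while the test fails.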
3.12 Frontline user
a. AI is an efficiency tool now in use across a broad range of stakeholders for day-to-day tasks. Accountability arrangements should be realistic in their approach to AI use: if they are so restrictive that they effectively prohibit AI, staff may use it anyway, resulting in ungoverned use. Aspects such as risk identification, incident reporting, and training that enables effective staff understanding of, and compliance with, AI use policies are therefore key to successful AI governance, regardless of whether AI use is permitted.
Wider considerations
3.13 When considering governance of AI, licensees and regulated persons must comply with the current regulatory framework. Policies and guidance relating to AI use are expected to be updated regularly to ensure they remain relevant and effective, including in relation to their supply chain, given the rapid advancements in AI and AI frameworks.
3.14 Stakeholders are expected to make effective use of audit and interpretability tools based on the complexity of the AI systems and the specific requirements of any audits and licence conditions, noting this can include the use of simpler AI models to help explain more complex AI models.
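As one hedged illustration of paragraph 3.14, the sketch below fits a shallow, human-readable decision tree as a global surrogate that approximates the predictions of a more complex model, then reports fidelity, that is, how closely the surrogate reproduces the complex model's outputs. The synthetic dataset, model choices and tree depth are assumptions for illustration, not recommended settings.

    # Illustrative sketch only: a simple global-surrogate approach in which a
    # shallow decision tree approximates a more complex model's behaviour.
    # The dataset, model choices and tree depth are assumptions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

    # The "complex" model under audit.
    complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Train a shallow tree to mimic the complex model's predictions, not the labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, complex_model.predict(X))

    # Fidelity: how closely the surrogate reproduces the complex model's outputs.
    fidelity = accuracy_score(complex_model.predict(X), surrogate.predict(X))
    print(f"Surrogate fidelity: {fidelity:.2%}")

    # Human-readable rules that can support audit and review discussions.
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))

A low fidelity score would indicate that the surrogate's rules cannot be relied on as an explanation of the complex model, which is itself useful evidence for an audit.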
3.15 Stakeholders are expected to establish robust monitoring and oversight processes for auditing and reviewing AI systems to ensure they remain compliant with internal guidelines and AI regulation.
3.16 Engaging with a diverse range of stakeholders, including energy customers and AI domain experts, is also advised to promote representation and inclusivity in AI development and use. This should contribute towards a well-rounded approach to AI accountability and the effective management of risks.