AI in the energy sector guidance consultation
Competencies
Expectation
5.1 Stakeholders have the right knowledge, skills and capability to understand how AI opportunities can be realised, and to ensure any associated challenges are clearly understood and appropriately mitigated. This includes the need for resilience, scalability and robust management of the associated vulnerabilities, risks and threats.
Description
5.2 For traditional software services and components, which are rules-based and deterministic in nature, suitable assurance can be achieved through established testing approaches and good software engineering practices over their life cycle. Read more about this in Functional safety of electrical, electronic and programmable electronic safety-related systems (IEC 61508), on the International Electrotechnical Commission's website. AI-based technologies, which are probabilistic in nature, raise new challenges, issues and risks. This probabilistic behaviour not only presents challenges for the resilience of developed services, but also creates risks around untested outputs that could cause components to fail in unexpected ways, creating potential vulnerabilities and opportunities for attack.
5.3 AI-based components and services, whether developed in-house, procured externally or integrated into the organisation as hosted services, require specialised knowledge and skills due to AI's unique behaviour and characteristics, as well as the rapid pace of development and changing capabilities.
Good practice
Practice 1: robust training plans
5.4 Depending on the application and proportionate to the risk, stakeholders are expected to have a robust plan not only to develop, but also to maintain, the appropriate knowledge, skills and capability in AI-based technologies, ensuring a sound approach to selecting, designing, developing, operating and governing appropriate AI-based solutions. Our good practice expectations are to:
a. define a base level of foundational AI knowledge needed for all staff, including adequate training on, and testing of internal knowledge of, established policies and procedures around the safe, secure, fair and sustainable use of AI
b. define and agree the appropriate level of additional AI knowledge and skills needed by role across the organisation, including governance, management, technical and operational areas
Practice 2: suitably qualified decision makers and staff
5.5 Roles are expected to be assigned to personnel with the appropriate knowledge and capabilities for AI, determined through robust assessment. Organisations are expected to designate an AI officer to oversee the ethical, responsible and effective deployment of AI technologies. This includes ensuring alignment with legal and regulatory requirements and appropriate standards, managing AI risks, and fostering transparency. The AI officer is expected to ensure AI not only drives business value, but does so in a responsible, transparent and compliant manner.
5.6 AI decision making roles are expected to be assigned to personnel or groups of personnel with the appropriate skills, knowledge, tools, and authority. The personnel responsible for AI risk management and treatment are expected to be:
a. suitably experienced in the design, development, use and operation of AI
b. familiar with the company's operational business model and how AI is used within it
c. knowledgeable about the assets involved, as well as any dependencies on third parties used to deliver and support the AI services
5.7 Stakeholders are expected to implement a training and development plan to upskill staff around AI appropriate to their roles and responsibilities, including management, project, technical and operational roles.
Practice 3: knowledge management policies and procedures
5.8 Stakeholders are expected to ensure knowledge management policies and procedures have been updated to cover AI knowledge and skills, including managing the rapidly changing landscape of this emerging technology. This is expected to include cyber security teams having an ongoing responsibility to monitor existing and emerging threats and vulnerabilities associated with the adoption and use of AI, both within the organisation and across the external threat landscape. In addition, establishing approaches to share experiences and lessons learnt from the use of AI, such as forums and communities of practice, can assist in knowledge management.
5.9 Stakeholders may consider using appropriate proof-of-concept work, research and development, learning projects, and collaboration and partnerships to develop and test organisational capabilities to safely design, deliver and operate AI-based technologies.
Practice 4: horizon scanning
5.10 Stakeholders are expected to monitor and track developments in AI, including in cyber security, to identify important areas where learning and development plans may need to be updated and access to competent people may need to be arranged.