AI in the energy sector guidance consultation
Glossary
Please note that terms in the glossary are taken from the references.
Adaptivity: the ability to identify patterns, reason, and make decisions in contexts and ways not directly envisioned by human programmers or outside the context of a system’s training data. (DSIT, 2024)
AI agents or agent, autonomous agent: AI systems that are capable of accomplishing multi-step tasks in pursuit of a high-level goal with little or no human oversight. AI agents may do things like browsing the internet, sending emails, or sending instructions to physical equipment. (DSIT, 2024)
AI deployers: any individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be internal, where a system is only used by the developers, or external, allowing the public or other non-developer entities to use it. (DSIT, 2024)
AI developers: organisations or individuals who design, build, train, adapt, or combine AI models and applications. (DSIT, 2024)
AI end user: any intended or actual individual or organisation that uses or consumes an AI-based product or service as it is deployed. (DSIT, 2024)
AI life cycle or AI product life cycle: all events and processes that relate to an AI system’s lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use, and governance. (DSIT, 2024)
AI risks: the combination of the probability of an occurrence of harm arising from the development or deployment of AI models or systems, and the severity of that harm. (DSIT, 2024)
Algorithm: a set of instructions used to perform tasks, such as calculations and data analysis, usually using a computer or another smart device. (UK Parliament POST, 2024)
Algorithmic bias: AI systems can have bias embedded in them, which can manifest through various pathways including biased training datasets or biased decisions made by humans in the design of algorithms. (UK Parliament POST, 2024)
Algorithmic transparency: the degree to which the factors informing general-purpose AI output, for example recommendations or decisions, are knowable by various stakeholders. Such factors might include the inner workings of the AI model, how it has been trained, what data it is trained on, what features of the input affected its output, and what decisions it would have made under different circumstances. (DSIT, 2024)
Alignment: the process of ensuring an AI system’s goals and behaviours is in line with its developer’s values and intentions. (DSIT, 2024)
Artificial intelligence (AI): describes computer systems which can perform tasks usually requiring human intelligence. This could include visual perception, speech recognition or translation between languages. (NCSC, n.d.)
Artificial general intelligence (AGI): a potential future AI system that equals or surpasses human performance on all or almost all cognitive tasks. A few AI companies have publicly stated their aim to build AGI. However, the term AGI has no universally precisely agreed definition. (DSIT, 2024)
Autonomy or autonomous: capable of operating, taking actions, or making decisions without the express intent or oversight of a human. (DSIT, 2024)
Automated decision-making: automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on factual data, as well as on digitally created profiles or inferred data. (ICO, n.d.)
Black box: a system, device or object that can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings. (ICO, 2023)
Capabilities: the range of tasks or functions that an AI system can perform and the proficiency with which it can perform them. (DSIT, 2024)
Cognitive tasks: tasks involving a combination of information processing, memory, information recall, planning, reasoning, organisation, problem solving, learning, and goal-oriented decision making. (DSIT, 2024)
Compute: computational resources, required in large amounts to train and run general-purpose AI models. Mostly provided through clusters of graphics processing units (GPUs). (DSIT, 2024)
Deep learning: a set of methods for AI development that leverages very large amounts of data and compute. (DSIT, 2024)
Deployment: the process of releasing an AI system into a real-world environment, such as a consumer-facing AI system. (DSIT, 2024)
Disinformation: deliberately false information generated or spread with the intent to deceive or mislead. (DSIT, 2024)
Foundation models: machine learning models trained on very large amounts of data that can be adapted to a wide range of tasks. (DSIT, 2023)
Frontier AI: highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. (UK Parliament POST, 2024)
Generative AI: an AI model that generates text, images, audio, video or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on. Generative AI applications include chatbots, photo and video filters, and virtual assistants. (UK Parliament POST, 2024)
Input (to an AI system): the data or prompt fed into an AI system, often text or an image, which the AI system processes before producing an output. (DSIT, 2024)
Large language model (LLMs): machine learning models trained on large datasets that can recognise, understand, and generate text and other content. (DSIT, 2024)
Machine learning (ML): the set of techniques and tools that allow computers to ‘think’ by creating mathematical algorithms based on accumulated data. (ICO, 2023)
Massive multitask language understanding (MMLU): a widely used AI research benchmark that assesses a general-purpose AI model’s performance across a broad range of tasks and subject areas. (DSIT, 2024)
Memorisation: a phenomenon in which AI tends to memorise specific details from examples rather than learning general patterns, affecting model generalisation, security, and privacy. (Jiaheng Wei, 2024)
Misinformation: incorrect or misleading information, potentially generated and spread without harmful intent. (DSIT, 2024)
Model drift: where the domain in which an AI system is used changes over time in unforeseen ways leading to the outputs becoming less statistically accurate. (ICO, 2023)
Model poisoning: attacks in which the model parameters are under the control of the adversary. Model poisoning attacks attempt to directly modify the trained machine learning model to inject malicious functionality into the model. (NIST, 2024)
Narrow AI: an AI system that only performs well on a single task or narrow set of tasks, like sentiment analysis or playing chess. (DSIT, 2024)
Open-ended domains: scenarios or environments that have a very large set of possible states and inputs to an AI system, so that developers cannot anticipate all contexts of use, and thus cannot test the AI’s behaviour in all possible situations. (DSIT, 2024)
Open source: open source often means the underlying code used to run AI models is freely available for testing, scrutiny and improvement. (UK Parliament POST, 2024)
Pre-training: the first stage of developing a modern general-purpose AI model, in which models learn from large amounts of data. Pre-training is the part of general-purpose AI training that requires the most data and computational resources. (DSIT, 2024)
Prompt injection: attacker technique in which a hacker enters a text prompt into a large language model or chatbot designed to enable the user to perform unintended or unauthorised actions. (NIST, 2024)
Risk factors: elements or conditions that can increase downstream risks. For example, weak guardrails constitute a risk factor that could enable an actor to malicious use an AI system to perform a cyber attack (downstream risk). (DSIT, 2024)
Responsible AI: often refers to the practice of designing, developing, and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust and upholding privacy rights. (UK Parliament POST, 2024)
Safety and security: the protection, wellbeing, and autonomy of civil society and the population. In this publication, safety is often used to describe prevention of or protection against AI-related harms. AI security refers to protecting AI systems from technical interference such as cyber-attacks or leaks of the code and weights of the AI model. (DSIT, 2024)
Shadow AI: a form of Shadow IT. Unknown AI assets, for example services and components, are used by individuals within an organisation for business purposes. These assets are not authorised, governed, accounted for by asset management, nor aligned with corporate IT processes or policy. (NCSC, n.d.)
Shadow IT: refers to the unknown IT assets that are used within an organisation for business purposes. These assets are not accounted for by asset management, nor aligned with corporate IT processes or policy. (NCSC, n.d.)
Synthetic data: data, such as text and images, that has been generated artificially, for instance by general-purpose AI models. Synthetic data might be used for training general-purpose AI models, such as in cases of scarcity of high quality natural data. (DSIT, 2024)
Transfer learning: a machine learning technique in which a model’s completed training on one task or subject area is used as a starting point for training or using the model on another subject area. (DSIT, 2024)
Transformer architecture: a deep learning architecture at the heart of most modern general-purpose AI models. The transformer architecture has proven particularly efficient at converting increasingly large amounts of training data and computational power into better model performance. (DSIT, 2024)
Training datasets: the set of data used to train an AI system. Training datasets can be labelled, for example pictures of cats and dogs labelled ‘cat’ or ‘dog’ accordingly or unlabelled. (UK Parliament POST, 2024)
Use case: an AI application or problem an AI system intends to solve. (ICO, 2023)
Weights: parameters in a model that are akin to adjustable dials in the algorithm. Training a model means adjusting its parameters to help it make accurate predictions or decisions based on input data, ensuring it learns from patterns it has seen. (DSIT, 2024)
White box: a system deployed without restrictions such that a user can access or analyse its inner workings. (DSIT, 2024)