International Scientific Report On The Safety of Advanced AI - Interim Report - Executive Summary
Executive Summary
The pace of future progress in general-purpose AI capabilities has substantial implications for
managing emerging risks, but experts disagree on what to expect even in the near future. Expert
views span the possibilities that general-purpose AI capabilities will advance slowly, rapidly, or
extremely rapidly. This disagreement turns on a key question: will continued ‘scaling’ of resources
and refinement of existing techniques be sufficient to yield rapid progress and solve issues such as
reliability and factual accuracy, or are new research breakthroughs required to substantially advance
general-purpose AI capabilities?
Several leading companies that develop general-purpose AI are betting on ‘scaling’ to continue leading
to performance improvements. If recent trends continue, by the end of 2026 some general-purpose
AI models will be trained using 40x to 100x more compute than the most compute-intensive models
published in 2023, combined with training methods that use this compute 3x to 20x more efficiently.
However, there are potential bottlenecks to further increasing both data and compute, including the
availability of data, AI chips, capital expenditure, and local energy capacity. Companies developing
general-purpose AI are working to navigate these potential bottlenecks.
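As a rough illustration only (not a projection made by the report), multiplying the two quoted ranges shows the implied growth in ‘effective’ training compute relative to the most compute-intensive models published in 2023:

```python
# Illustrative arithmetic combining the report's quoted ranges:
# 40x-100x more raw training compute, used 3x-20x more efficiently.
# The product gives an "effective compute" multiplier range.

compute_growth = (40, 100)   # raw compute multiplier range
efficiency_gain = (3, 20)    # training-efficiency multiplier range

low = compute_growth[0] * efficiency_gain[0]
high = compute_growth[1] * efficiency_gain[1]

print(f"Effective training compute: {low}x to {high}x")  # 120x to 2000x
```

On these figures, end-of-2026 models could in effect be trained with roughly 120 to 2,000 times the effective compute of 2023's largest published models, which is why the bottlenecks listed above matter so much.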
• ‘Loss of control’ scenarios are potential future scenarios in which society can no longer
meaningfully constrain general-purpose AI systems, even if it becomes clear that they are causing
harm. There is broad consensus that current general-purpose AI lacks the capabilities to pose this
risk. Some experts believe that current efforts to develop general-purpose autonomous AI –
systems that can act, plan, and pursue goals – could lead to a loss of control if successful. Experts
disagree about how plausible loss-of-control scenarios are, when they might occur, and how
difficult it would be to mitigate them.
Systemic risks. The widespread development and adoption of general-purpose AI technology poses
several systemic risks, ranging from potential labour market impacts to privacy risks and
environmental effects:
• General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a
very wide range of tasks, which could have a significant effect on the labour market. Many people
could lose their current jobs as a result. However, many economists expect that potential job
losses could be offset, possibly completely, by the creation of new jobs and by increased demand
in non-automated sectors.
• General-purpose AI research and development is currently concentrated in a few Western
countries and China. This ‘AI Divide’ is multicausal, but in part stems from differing levels of access
to the compute needed to develop general-purpose AI. Since low-income countries and
academic institutions have less access to compute than high-income countries and technology
companies do, they are placed at a disadvantage.
• The resulting market concentration in general-purpose AI development makes societies more
vulnerable to several systemic risks. For instance, the widespread use of a small number of
general-purpose AI systems in critical sectors like finance or healthcare could cause simultaneous
failures and disruptions on a broad scale across these interdependent sectors, for example due
to shared bugs or vulnerabilities.
• Growing compute use in general-purpose AI development and deployment has rapidly increased
energy usage associated with general-purpose AI. This trend shows no indications of moderating,
potentially leading to further increased CO2 emissions and water consumption.
• General-purpose AI models or systems can pose risks to privacy. For instance, research has
shown that by using adversarial inputs, users can extract training data containing information
about individuals from a model. For future models trained on sensitive personal data like health or
financial data, this may lead to particularly serious privacy leaks.
• Potential copyright infringements in general-purpose AI development pose a challenge to
traditional intellectual property laws, as well as to systems of consent, compensation, and control
over data. An unclear copyright regime disincentivises general-purpose AI developers from
declaring what data they use and makes it unclear what protections are afforded to creators
whose work is used without their consent to train general-purpose AI models.
Cross-cutting risk factors. Underpinning the risks associated with general-purpose AI are several
cross-cutting risk factors – characteristics of general-purpose AI that increase the probability or
severity of not one but several risks:
• Technical cross-cutting risk factors include the difficulty of ensuring that general-purpose AI
systems reliably behave as intended, our lack of understanding of their inner workings, and the
ongoing development of general-purpose AI ‘agents’ which can act autonomously with reduced
oversight.
• Societal cross-cutting risk factors include the potential disparity between the pace of
technological progress and the pace of a regulatory response, as well as competitive incentives
for AI developers to release products quickly, potentially at the cost of thorough risk management.