Frontier AI poses significant risks (Bengio et al., 2025). It broadens access to tools for generating deceptive or harmful content (Helberger & Diakopoulos, 2023), exacerbates national security threats by enabling sophisticated offensive cyber capabilities (Hazell, 2023; Anthropic, 2025), and heightens inequalities through biased outputs (Gallegos et al., 2023), to name a few. Traditionally, risk management has offered a useful framework for addressing product-safety and organisational risks in other critical sectors (cite). Risk management processes operate at multiple levels: through high-level principles and processes for managing risks to organisations (e.g., ISO 31000); through sector-specific standards for managing risks associated with particular classes of products (e.g., ISO 14971 for medical devices); through guidance on selecting among relevant risk assessment techniques at different stages of the risk management process (e.g., IEC 31010:2019); and through overarching frameworks for integrating safety considerations across the risk management process (e.g., ISO/IEC Guide 51).
In the context of AI, existing risk management standards primarily address narrow AI systems (e.g., ISO/IEC 23894, ISO/IEC 42001). These instruments were largely developed prior to the emergence of “frontier” or “general-purpose” AI, a development that both amplifies existing risks and introduces qualitatively novel challenges. Not only is there an exceptional lack of stable scientific consensus, owing to the rapid pace of technological change (Roberts & Ziosi, 2025); the safety practices that are emerging for frontier AI are also not adequately aligned with, and may even undermine, established risk management processes (cite). Concurrently, improvements to and proposals for frontier AI risk management are being pursued along several distinct fronts. These include easily updatable, specific technical guidance (e.g., FMF, UK AISI); mapping of the existing consensus on AI safety risks (e.g., the international scientific report); and independent proposals (e.g., CLCT, SaferAI, Shanghai AI Lab & Concordia AI, 2025) and regional efforts (e.g., EU CoP, NIST Generative AI Profile, TC260) for dedicated frontier AI risk management frameworks. Without a field-orienting, coordinating function, however, these efforts risk relying on flawed assumptions about the state of the field; they may fail to deliver targeted, meaningful progress, generate duplicative work, and create confusion or divergence over what should be applied in which contexts. In the end, they risk reproducing, rather than resolving, existing problems.
To prevent this, we propose to systematically surface open problems in the field of frontier AI risk management. We take a problem-oriented approach to advancing the field by shedding light on what needs addressing, an approach historically common in other disciplines (see Phil of Sc. and Maths) and recently used to advance other challenges in frontier AI (see Reuel et al.; Casper et al.; Fazl’s). Our goal is twofold: 1) to pave the way for future solutions by formulating research questions and pinpointing which actors ought to pursue them, and 2) to highlight which challenges must be addressed so that meaningful and robust consensus on AI risk management can be pursued. We do so by systematically examining each stage of the risk management process, reviewing the relevant existing literature for each stage, and identifying the “open problems”.
By “open problems,” we refer to unresolved issues concerning the processes and techniques that organisations must implement to manage AI-related risks effectively. Accordingly, the paper does not focus a priori on a predefined set of risks from AI, but rather on the organisational and procedural mechanisms through which risks are identified, assessed, and managed. While the analysis primarily concerns strategies available to organisations developing, deploying, and integrating AI systems, it also considers the roles of other relevant actors, such as regulators, academic researchers, standards developers, and third-party auditors, insofar as they shape or support effective risk management processes. Additionally, given that different kinds of open problems may require different approaches to address them, we classify the identified open problems according to whether they reflect (a) a lack of scientific (or technical) consensus, (b) misalignment with or challenges to established risk management frameworks, or (c) shortcomings in implementation or application despite consensus and alignment. While this classification can help indicate which kinds of efforts are needed, we refrain from proposing specific solutions, as these may be better formulated by the actors best placed to address them. The concrete outcome of this work is a field-orienting reference document, complemented by a living repository hosted online, intended to help relevant stakeholders identify gaps, coordinate action, and collectively advance better practices.
We recognise a few caveats and limitations. While our approach is systematic, the list of problems does not aim to be exhaustive, but at best illustrative of a range of relevant problems. Many of the open problems discussed arise precisely because there has already been substantial progress in these areas, such that the underlying challenges are becoming visible. Consequently, areas where we have identified relatively few open problems should not be understood as better developed or less important, but rather as areas that remain insufficiently understood and explored for the relevant challenges to be clearly identified and articulated. We use the term “problems” as a useful heuristic: it should not be taken to describe only issues that are inherently negative or fully solvable, but also covers persistent challenges that must be continually managed, as well as productive disagreements or differing approaches, each with its own advantages and disadvantages. The aim of this work is therefore to surface and clarify such issues, rather than to claim their definitive resolution.
To encourage alignment between traditional risk management and frontier AI risk management practices and frameworks, the paper surveys the open problems following, as far as possible, the high-level structure of existing risk management standards. The document is organised into the following sections: 1. Planning the Assessment, 2. Risk Identification, 3. Risk Analysis, 4. Risk Evaluation, and 5. Risk Treatment. To keep the scope manageable, we leave out transversal aspects such as Communication and Consultation, Monitoring and Review, and Recording and Reporting, also presented under “risk governance” in other recent framework proposals (cite). However, we do not exclude their inclusion in future iterations.