Toward a Risk-Informed Regulatory Framework for Artificial Intelligence
Valentino, Justin
Permalink
https://hdl.handle.net/2142/121813
Description
- Title
- Toward a Risk-Informed Regulatory Framework for Artificial Intelligence
- Author(s)
- Valentino, Justin
- Issue Date
- 2023
- Keyword(s)
- Risk-informed regulation
- Probabilistic risk assessment
- Artificial intelligence (AI)
- Abstract
- The current narrative surrounding Artificial Intelligence (AI) systems often portrays a bleak future filled with undesirable consequences. However, to usher in an era where AI can truly transform society for the better, it is essential to re-program this doom-and-gloom narrative. One avenue for advancement is using systematic risk assessment tools for risk-informed decision-making and regulation, drawing inspiration from the nuclear power industry's central risk assessment technique known as Probabilistic Risk Assessment (PRA). PRA, also known as Probabilistic Safety Assessment, has been a key pillar of safety policy setting and regulation for the U.S. Nuclear Regulatory Commission (NRC) under the Risk-Informed Regulatory Framework. It involves assessing the probability and consequences of events, expressed as the "risk triplet." This paper explores the concept of a risk-informed regulatory framework for AI safety, outlining key principles based on a framework for creating transformational organizations: (1) Purpose: Protect human societies from potential undesirable consequences of uncontrollable Artificial Intelligence (AI) systems. (2) Challenge: Decentralized and privatized development of AI systems with the potential to do harm. The risk of AI systems may be equivalent to the magnitude of high-consequence, complex, socio-technical systems under human control. (3) Function: A regulatory body capable of assessing the societal risks of AI systems and of providing guidance and oversight of the development and deployment of potentially dangerous AI systems. (4) Embodiment: A multidisciplinary regulatory commission capable of implementing and enforcing a risk-informed regulatory framework, requiring leading technology companies to perform risk assessments of proposed AI system designs. (5) Leadership: Provide a structure that encourages industry to lead in the self-assessment of potential undesirable outcomes due to their own technologies and establishes an organizational safety culture around the prevention of societal harm. (6) Structure: Develop a non-partisan, independent commission of subject matter experts with research and support staff, able to run panels that focus on key areas of AI safety risk, serving as a policy-making entity and knowledge resource for other branches of government. (7) Ritual: (A) Research/Plan, (B) Educate/Inform, (C) Monitor/Review, (D) Regulate/Enforce. (8) Communication: Share research and evolving guidance via rulemaking and peer-reviewed open-source publications, and maintain secure virtual code-sharing environments between regulatory experts and industry developers. (9) Belief: Existential risks and severe undesirable consequences of AI systems are preventable and avoidable. (10) Constitution: Protection of the public from undue harm. (11) Container: Creating heart-led panels and working groups that leverage human innovation and creativity to postulate the outcomes of new AI systems. (12) Perception: The risk perception of AI systems can be calibrated using realistic models of AI risk scenarios. (13) Growth: Humans can leverage AI systems for the growth and evolution of the species, and AI systems can be grown in ethical and equitable ways and provide a net benefit for humanity.
- Type of Resource
- text
- Genre of Resource
- Conference Paper / Presentation
- Language
- eng
- Handle URL
- https://hdl.handle.net/2142/121813
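The "risk triplet" referenced in the abstract asks, for each postulated scenario: what can go wrong, how likely is it, and what are the consequences. A minimal sketch of how such triplets might be tabulated and ranked follows; the scenario names, probabilities, and severity scores are hypothetical illustrations for this sketch, not values from the paper.

```python
from dataclasses import dataclass


@dataclass
class RiskTriplet:
    """One PRA-style triplet: scenario, likelihood, consequence."""
    scenario: str       # what can go wrong?
    likelihood: float   # assumed probability of occurrence per year
    consequence: float  # assumed severity score (illustrative units)

    @property
    def expected_consequence(self) -> float:
        # Simple likelihood-weighted consequence, one common way
        # to compare scenarios on a single scale.
        return self.likelihood * self.consequence


# Hypothetical AI risk scenarios for illustration only.
triplets = [
    RiskTriplet("unintended model behavior in deployment", 1e-2, 50.0),
    RiskTriplet("misuse of a deployed model by an external actor", 1e-3, 200.0),
    RiskTriplet("loss of human oversight of an automated system", 1e-5, 10_000.0),
]

# Rank scenarios by expected consequence, as a risk-informed
# regulator might when prioritizing guidance and oversight.
ranked = sorted(triplets, key=lambda t: t.expected_consequence, reverse=True)
for t in ranked:
    print(f"{t.scenario}: {t.expected_consequence:.3f}")
```

Note that a full PRA would model scenario structure (event trees, fault trees) and uncertainty in the likelihoods, rather than single point estimates as here.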
Owning Collections
PSAM 2023 Conference Proceedings