Produced by the AI Governance Pillar at AI Singapore,
now part of the NUS Artificial Intelligence Institute
The widespread and growing integration of AI systems into decision-making processes across diverse sectors underscores the need for a sound understanding of these systems. However, the complexity and black-box nature of many modern AI systems make them difficult to comprehend. It is therefore important to provide necessary and meaningful information about these complex AI systems in order to build user trust, ensure accountability, facilitate audit and oversight for regulatory compliance, and support further model improvement by developers.
Despite these needs, we remain in the early stages of determining what kind of information should be provided, to whom (e.g., users, deployers, regulators), when (i.e., ex ante or ex post), and why. To address this issue and derive policy and regulatory implications, AI Singapore convened this roundtable, bringing together regulators, practitioners, and academics to examine how organizations can and should operationalize transparency and explainability.
Funding: This roundtable was funded via a charitable grant from Google.org, as part of Google’s Digital Futures Project.
