
Aditi Saha

AI in Governance

Effectively Governing AI

Artificial intelligence systems have become increasingly prevalent in everyday life and enterprise settings, and they’re now often being used to support human decision-making.

When we understand how a technology works and we can assess that it’s safe and reliable, we’re far more inclined to trust it. But even when we don’t understand the technology (do you understand how a modern automobile works?), if it has been tested and certified by a respectable body, we are inclined to trust it. Many AI systems today are black boxes, where data is fed in and results come out. To trust a decision made by an algorithm, we need to know that it is fair, that it’s reliable and can be accounted for, and that it will cause no harm. We need assurances that AI cannot be tampered with and that the system itself is secure. We need to be able to look inside AI systems, to understand the rationale behind the algorithmic outcome, and even ask it questions as to how it came to its decision.

Hence, enterprises creating such AI services face an emerging problem: how to effectively govern the creation and deployment of these services. They want to understand and gain control over their AI lifecycle processes, often motivated by internal policies or external regulation.

The AI lifecycle involves a variety of roles, performed by people with different specialized skills and knowledge, who collectively produce an AI service. Each role contributes in a unique way, using different tools. Figure 1 shows some common roles.

Data flows throughout this lifecycle as raw input data, engineered features, model predictions, and performance metric results. Data governance is the overall management of data availability, relevancy, usability, integrity, and security in an enterprise. It helps organizations keep track of what they know about their data and answer questions such as where it came from, who is responsible for it, and how it may be used.
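As a minimal sketch of what that bookkeeping can look like in practice, the snippet below records provenance metadata for each artifact that flows through the lifecycle (raw data, engineered features, predictions, metrics) and traces an artifact back to its sources. The `LineageRecord` structure, field names, and artifact names are illustrative assumptions, not a reference to any particular governance product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative lineage record; the schema is an assumption for this sketch.
@dataclass
class LineageRecord:
    artifact: str                      # e.g. "raw_input", "features_v2"
    produced_by: str                   # role or pipeline step that created it
    source_artifacts: list = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A tiny in-memory catalog tracing data through the lifecycle.
catalog = [
    LineageRecord("raw_claims.csv", "data engineer"),
    LineageRecord("features_v2", "data scientist", ["raw_claims.csv"]),
    LineageRecord("model_predictions", "ML engineer", ["features_v2"]),
    LineageRecord("fairness_metrics", "model validator", ["model_predictions"]),
]

def upstream(artifact: str, records: list) -> list:
    """Answer a simple governance question: what did this artifact come from?"""
    result = []
    for rec in (r for r in records if r.artifact == artifact):
        for src in rec.source_artifacts:
            result.append(src)
            result.extend(upstream(src, records))
    return result

print(upstream("fairness_metrics", catalog))
# ['model_predictions', 'features_v2', 'raw_claims.csv']
```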

[Figure 1: Common roles in the AI lifecycle]

Various enterprises are developing theoretical and algorithmic frameworks that use generative AI to synthesize realistic, diverse, and targeted data. To increase the accountability of high-risk AI systems, we also need technologies that improve their end-to-end transparency and fairness.
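As a rough illustration of targeted data synthesis, the sketch below fits a per-class Gaussian to a toy tabular dataset and samples new records for a chosen class. This is a deliberately simple stand-in for the generative models such frameworks actually use, and the column values and class labels are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: two numeric features per class label (values are made up).
real = {
    "approved": np.array([[52.0, 3.1], [48.0, 2.9], [55.0, 3.4]]),
    "denied":   np.array([[31.0, 1.2], [28.0, 1.0], [35.0, 1.5]]),
}

def synthesize(samples_per_class: int):
    """Fit a diagonal Gaussian per class and sample synthetic rows from it."""
    synthetic = {}
    for label, rows in real.items():
        mean = rows.mean(axis=0)
        std = rows.std(axis=0) + 1e-6   # avoid zero variance
        synthetic[label] = rng.normal(mean, std,
                                      size=(samples_per_class, rows.shape[1]))
    return synthetic

fake = synthesize(samples_per_class=5)
print(fake["denied"].round(2))  # 5 synthetic rows targeted at the "denied" class
```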

Tools and technologies developed by AI enterprises must be adept at tracking and mitigating biases at multiple points along the machine learning pipeline, using the metrics appropriate to their circumstances, with the results captured in transparent documentation. They should help an AI development team check systematically for biases, much as it checks for bugs or security violations in a continuous integration pipeline.
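A hedged sketch of what such a check might look like: the function below computes the disparate impact ratio (the rate of favorable outcomes for an unprivileged group divided by the rate for the privileged group) and fails, much like a unit test, when the ratio falls below a threshold. The 0.8 cutoff, the group labels, and the toy data are assumptions for illustration; a real pipeline would choose the metric and threshold appropriate to its circumstances.

```python
# A minimal bias check that could run as one step of a CI pipeline,
# alongside unit tests and security scans.
def disparate_impact(outcomes, groups, unprivileged, privileged) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

def check_bias(outcomes, groups, threshold=0.8):
    di = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
    if di < threshold:
        # In CI this would fail the build, just like a failing test.
        raise AssertionError(f"Disparate impact {di:.2f} below threshold {threshold}")
    return di

# Toy predictions (1 = favorable outcome) and the group of each record.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

try:
    check_bias(outcomes, groups)
except AssertionError as err:
    print(err)  # Disparate impact 0.33 below threshold 0.8
```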

Bringing together mitigation techniques appropriate for different points in the pipeline to address different biases (social, temporal, etc.) will help developers produce real-world deployments that are safe and secure.
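As one hedged example of a pre-processing mitigation, the sketch below applies reweighing before training: each (group, label) combination gets an instance weight so that the training data behaves as if group membership and outcome were independent. The group and label values are illustrative; in-processing and post-processing techniques would address bias at later points in the same pipeline.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Pre-processing mitigation (reweighing): w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "B" rarely receives the favorable label 1.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]

print([round(w, 2) for w in reweighing_weights(groups, labels)])
# [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
# Rare combinations such as ("B", 1) are up-weighted, common ones down-weighted.
```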
