Lofred Madzou

There are reasons to be hopeful for the future of AI governance

Organizations deploying AI will have to either address the demand for trusted AI systems or be pushed out of the market.


A few weeks ago I had the pleasure of being invited to speak on the Responsible AI Podcast, hosted by Fiddler, to discuss my work in this fascinating and complex domain. Our conversation centered on my approach to Responsible AI and how it is put into practice in global governance projects, using the “Responsible Limits on Facial Recognition” project as a case study.


Feedback from colleagues and industry peers was overwhelmingly positive and supportive of a sensible, practical approach to AI governance. A minor point of contention was my suggestion that “only responsible AI companies will survive”; given the growing number of reports of unethical uses of AI, as well as public concern about its potential adverse impacts, this statement may have seemed optimistic. I understand this perspective, and would like to take a moment to outline why, and how, I believe we’re heading in the right direction.



A Rapid Awareness


Only five years ago, we were at the peak of the AI hype. It was presented as a revolutionary technology that could cure cancer, deliver self-driving cars, and offer personalized learning to every student. None of these promises has yet materialized, but that does not mean AI is overhyped. Under the current deep learning paradigm, AI has made measurable progress, and a rapidly growing number of businesses report significant benefits after deploying it. These unmet promises simply mean that there is more to intelligence than pattern recognition. Indeed, a number of valuable social and business problems are irreducible to this approach and require new thinking.


Even in areas where deep learning has achieved “human-level” performance on benchmark datasets, significant challenges remain. It turns out that the most advanced AI systems are surprisingly brittle, data-hungry, susceptible to unfair bias, and vulnerable to cyberattacks.



Dozens of researchers, activists, and reporters have documented these challenges, and my colleagues and I have modestly tried to explain their governance implications. Thanks to these efforts, AI’s limitations and vulnerabilities, which were known only to a narrow circle of experts five years ago, are now common knowledge among policymakers and business communities. From a policy perspective, that is a remarkably fast development.


A Multistakeholder Demand for Trusted AI Systems


This rapid awareness has created a multistakeholder demand for trusted AI systems, that is, systems whose behavior is consistent with a defined set of requirements (robustness, security, explainability, non-discrimination, etc.). That demand starts with civil society: on both sides of the Atlantic, civil rights organizations have voiced increasing concern about deployments of AI that may undermine human rights. More specifically, they have asked policymakers to take appropriate action to ensure that the application of AI-powered systems to an ever-widening number of domains (e.g. employment, healthcare, criminal justice) does not serve to reinforce or widen disparities between communities.


Unsurprisingly, the most ambitious policy response has come from the European Union (EU), where the European Commission (EC) recently released its Artificial Intelligence Act, a comprehensive regulatory proposal that classifies AI applications into four distinct risk categories (a brief illustrative code sketch of this tiering follows the list):

1) Unacceptable risk: these use cases will be banned (e.g. social scoring).

2) High risk: these will be subject to quality management and conformity assessment procedures (e.g. recruitment tools, robot-assisted surgery, autonomous vehicles, credit scoring, predictive maintenance for utilities and telecommunications).

3) Limited risk: these will be subject to minimal transparency obligations (e.g. chatbots).

4) Minimal risk: these won’t face any additional provisions (e.g. spam filters).
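
To make the tiering concrete, here is a minimal sketch, in Python, of how an organization might encode these categories in an internal system inventory. The tier names and obligations paraphrase the proposal above, but the use-case mapping, the function names, and the conservative default are my own illustrative assumptions, not anything prescribed by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # quality management + conformity assessment
    LIMITED = "limited"            # minimal transparency obligations
    MINIMAL = "minimal"            # no additional provisions

# Illustrative mapping only: real classification requires legal analysis
# of the Act's annexes, not a hard-coded lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_tool": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
    RiskTier.HIGH: ["quality management system", "conformity assessment"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the (illustrative) obligations attached to a use case."""
    # Unknown use cases default to HIGH so nothing slips through unreviewed.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for("chatbot"))  # ['transparency notice to users']
```

Treating unknown use cases as high risk by default is a deliberate design choice here: under a compliance regime, it is safer to over-scope an assessment than to discover an unclassified high-risk system in production.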



The EU has long championed an ethical approach to AI centered on the protection of human rights. EU policymakers hope that, once adopted, this proposal will foster the development of human-centric AI both within the EU single market and globally. Evidence suggests that US regulators are working in a similar direction, homing in on biased AI systems in particular.


These emergent AI regulation and compliance schemes are creating unique business opportunities that thoughtful investors and entrepreneurs are rushing to seize. This is perhaps best illustrated by the growing Responsible AI startup ecosystem: companies can now receive external guidance on improving the privacy, explainability, fairness, and risk management of their AI models. To date, all of the top-ranked professional services firms have Responsible AI consulting practices to advise their clients on this issue.


Taking Action


The demand for trusted AI systems is likely to increase in the coming years. As a result, organizations deploying AI will face an existential choice: either anticipate this demand and transform into responsible-AI-driven organizations, or be pushed out of the market. Business leaders would be greatly mistaken to adopt a “wait and see” attitude, for example by assuming they can focus on scaling AI first and add a robust governance layer later. These two activities are rapidly becoming inextricably intertwined. Put simply, only responsible AI systems can hope to scale.


While industry actors increasingly acknowledge this reality, many remain unsure about the specific next steps for a successful transformation. There is an overwhelming number of AI governance frameworks, and most of them are notoriously hard to operationalize. I suggest beginning with a careful review of the EU Artificial Intelligence Act, because it is likely to set the global regulatory standard for AI. Then run an internal audit of your AI systems to assess compliance with the EC’s requirements. If you’re uncertain about how to perform such an audit, feel free to reach out; my team has designed practical toolkits that you may find useful for this process.
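
For readers wondering what such an internal audit could look like in code, here is a deliberately simplified sketch. The checklist items paraphrase the broad themes of the EC’s high-risk requirements (data governance, documentation, transparency, human oversight, robustness); the function and system names are hypothetical placeholders, not our actual toolkits.

```python
from dataclasses import dataclass, field

# Broad themes drawn from the high-risk requirements in the EU AI Act
# proposal; the actual legal obligations are more detailed than this.
CHECKLIST = [
    "risk tier identified and documented",
    "training data governance reviewed (provenance, representativeness)",
    "technical documentation and logging in place",
    "transparency information provided to users",
    "human oversight mechanism defined",
    "accuracy, robustness and cybersecurity tested",
]

@dataclass
class AuditRecord:
    system_name: str
    findings: dict = field(default_factory=dict)

    @property
    def gaps(self):
        return [item for item, ok in self.findings.items() if not ok]

def audit(system_name: str, evidence: dict) -> AuditRecord:
    """Check each checklist item against collected evidence (True/False).
    Missing evidence counts as a gap, which keeps the audit conservative."""
    record = AuditRecord(system_name)
    for item in CHECKLIST:
        record.findings[item] = bool(evidence.get(item, False))
    return record

if __name__ == "__main__":
    # Hypothetical example: a recruitment tool with partial evidence.
    result = audit("recruitment_tool", {
        "risk tier identified and documented": True,
        "human oversight mechanism defined": False,
    })
    print(f"{result.system_name}: {len(result.gaps)} gaps to remediate")
    for gap in result.gaps:
        print(" -", gap)
```

A real audit would attach evidence and legal analysis to each item rather than a boolean flag, but even this skeleton is useful for tracking remediation gaps across an AI portfolio.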
