
PROJECTS
Policy Projects
Responsible Limits on Facial Recognition Technology

Stage: pilot phase
The challenge
Over the last few years, rapid technological advances, driven mainly by progress in machine learning and sensors, have fuelled the development of facial recognition technology (FRT), accelerating its move from research into industry adoption. FRT creates considerable opportunities for socially beneficial uses, chiefly through enhanced authentication and identification processes. Yet serious concerns remain about its potential use for mass surveillance and its susceptibility to unfair bias.
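To make the authentication/identification distinction concrete, here is a minimal sketch of how FRT pipelines typically work: faces are mapped to embedding vectors by a learned model, and decisions reduce to similarity comparisons. The function names, the cosine-similarity metric, and the 0.6 threshold are illustrative assumptions, not the design of any particular FRT product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.6) -> bool:
    """1:1 authentication: does the probe face match the enrolled identity?

    The 0.6 threshold is an illustrative assumption; deployed systems
    calibrate it against target false match / false non-match rates.
    """
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """1:N identification: return the best-matching identity in the
    gallery if its score clears the threshold, otherwise None."""
    scores = {name: cosine_similarity(probe, emb)
              for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The single global threshold is precisely where bias concerns enter: if similarity scores are distributed differently across demographic groups, one threshold yields unequal false match rates between them, which is the kind of risk a certification framework would need to surface.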
Goal
Co-designing a certification framework to ensure the trustworthy use of facial recognition technology.
Impact
- Providing engineers and product teams developing FRT solutions with a practical guide to implementing robust risk-mitigation processes.
- Enabling policymakers to ensure that citizens' and consumers' rights and freedoms are effectively protected while promoting responsible innovation.
Reimagining Regulation for the Age of AI

Stage: pilot phase
The challenge
Government officials around the world are increasingly aware of both the opportunities and the risks that AI presents for public-sector operations. They also acknowledge that some form of AI regulation is needed, given the duty of care owed to citizens, particularly as governments make highly consequential decisions supported by AI. Yet regulating AI is a complex endeavour, and in the absence of clear rules, the adoption of AI within governments remains low.
Goal
Facilitating the adoption of responsible AI in government.
Impact
- Fostering a transparent and inclusive public conversation about AI in government.
- Establishing a Centre of AI Excellence to boost government's capabilities in this domain.
- Designing a risk/benefit assessment framework for AI in government.
Academic Projects

Auditing AI: Scope, Method and Implementation
Stage: scoping phase
The challenge
Various research papers and media reports have demonstrated that even the most advanced AI systems are surprisingly brittle, susceptible to unfair bias, and vulnerable to hacking and cyberattacks. This literature has highlighted the need to develop appropriate oversight processes to ensure that the behaviour of AI systems is consistent with different sets of legal (e.g. the EU's non-discrimination law) or corporate (e.g. organizational guidelines) requirements. In response, numerous AI scholars and practitioners have argued that this could be achieved through robust audit processes. However, it remains challenging to reach consensus on a common method to assess, and if needed re-establish, the consistency of a given AI system with the identified requirements.
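To illustrate what one fragment of such an audit could look like in practice, the sketch below checks a model's binary decisions against a demographic-parity requirement. The metric choice, the `audit_demographic_parity` helper, and the 0.05 tolerance are illustrative assumptions; a real audit would combine several metrics tied to the specific legal or corporate requirement at stake.

```python
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Per-group rate of positive decisions (e.g. approvals, matches)."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    rates = selection_rates(y_pred, group).values()
    return max(rates) - min(rates)

def audit_demographic_parity(y_pred: np.ndarray, group: np.ndarray,
                             tolerance: float = 0.05) -> dict:
    """Pass/fail check of a single fairness requirement.

    The tolerance is an illustrative assumption; in practice it would be
    derived from the applicable legal or organizational requirement.
    """
    gap = demographic_parity_gap(y_pred, group)
    return {"metric": "demographic_parity", "gap": gap, "pass": gap <= tolerance}

# Example: decisions for individuals from two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(audit_demographic_parity(y_pred, group))
# Group "a" receives positive decisions 75% of the time, group "b" 25%:
# the gap of 0.5 exceeds the tolerance, so the audit fails.
```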
Goal
Creating a practical audit framework to foster the development of trustworthy AI systems.
Impact
- Building a multidisciplinary research community focused on the audit of AI systems.
- Equipping regulatory agencies and oversight bodies with a practical framework for assessing the trustworthiness of AI systems.
- Providing AI practitioners with a tool to proactively identify, monitor, and mitigate the risks associated with their systems.
Towards a Phenomenology of Ethical AI Expertise

Stage: scoping phase
Goal
Uncovering the real nature of ethical expertise as a skill acquired through involved coping.
Impact
- Demonstrating that ethical skill cannot be fully captured by high-level principles, and why persisting with this approach is morally wrong.
- Reaffirming the importance of human judgment in addressing the ethical effects of AI systems.
- Encouraging industry actors and policymakers to introduce regulations limiting the delegation of highly consequential moral decisions to AI systems.
The challenge
Dozens of organizations have produced statements describing high-level principles and adopted a myriad of fairness toolkits to ensure the ethical design and deployment of AI systems. The main assumption underlying this development is that moral judgment can be formalized through explicit universal ethical principles that AI developers can then follow. Yet there is mounting evidence that this approach has had little impact on the AI industry. I argue that this failure is partly caused by a widespread misunderstanding of the “nature” of human ethical expertise and the actions needed for AI practitioners to reach the expert level.