Is your decision-making safe from discrimination?

Artificial Intelligence (AI) is increasingly supporting decision-making in organizations. However, it is not free from bias, and its use has already led to discriminatory consequences. What steps should companies take to avoid missteps?

Basing professional decisions on an artificial intelligence system (AIS) allows us to benefit from analyses of volumes of data that would be difficult, even impossible, for humans to process. Since an AIS learns from the information it is fed, it reflects the system and the people that supply that information. AI systems therefore tend to reproduce, and even amplify, the cultural and social context in which they are created, including its biases and stereotypes. Specific AIS have already been shown to discriminate, notably against women (during recruitment, for example) or members of visible minorities (particularly with facial recognition).

Most often, an AIS produces discriminatory results for two main reasons. The first is a failure to account for the biases (cultural, social, lexical, empirical, etc.) carried by the designers of the AIS and their impact on the results produced. The second is that, if left uncorrected, the data used to train an AIS amplifies both the biases that data contains and the biases of the existing system.
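To make the second point concrete, here is a minimal sketch in Python, using purely illustrative records and hypothetical field names (a gender label and a hired flag), of how one might check whether historical training data already encodes a disparity that an AIS would then learn and reproduce:

```python
from collections import Counter

# Hypothetical historical hiring records used to train an AIS.
# The field names and values are illustrative only.
records = [
    ("woman", False), ("woman", False), ("woman", True), ("woman", False),
    ("man", True), ("man", True), ("man", False), ("man", True),
]

# Positive-outcome rate per group in the training data.
totals = Counter(group for group, _ in records)
positives = Counter(group for group, hired in records if hired)

for group in totals:
    rate = positives[group] / totals[group]
    print(f"{group}: {rate:.0%} positive rate in the training data")

# If one group's historical rate is much lower, a model trained on
# this data will tend to reproduce (or amplify) that disparity.
```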

Who is responsible for making the decision?

More and more companies entrust an AIS with recruiting people, approving their requests, providing them with a product or service, or ensuring their safety. They legitimately expect that this AIS will not expose them to accusations of making discriminatory decisions. However, the process by which an AIS suggests one decision rather than another often takes the form of a vast “black box” for its users: the AIS’s internal algorithms and decision trees are unknown and difficult for users to understand.

For companies, implementing an AIS can be a major and costly project with sometimes uncertain returns on investment. It may therefore seem tempting not to add considerations and constraints around equity, diversity and inclusion (EDI), so as not to overload the stakeholders involved or the project’s business case. All the more so because, even with good intentions at the outset, it is sometimes difficult to know how to apply EDI principles to an AIS project and to ensure that the results comply with and respect those principles.

However, organizations are responsible for the decisions they make; they must therefore understand and reduce the risks of discrimination that their AIS may create for their customers and employees. The negative effects of unfair AIS decisions can be significant for organizations, whether from a legal, financial or reputational standpoint. Taking refuge behind the “black box” of these systems and their unintelligibility, or blindly trusting digital solutions, is therefore not an acceptable option. In short, companies must be vigilant when implementing an AIS and adopt specific policies and concrete practices to mitigate these risks.

How to implement a responsible AIS?

Discussions with a panel of experts[1] identified a number of policies and practices necessary for implementing a non-discriminatory and inclusive AIS:

  • Integrate EDI issues and policies into any AIS project[2].
  • Identify as early as possible the biases that may exist in the systems and processes the AIS is meant to support.
  • To limit biases and stereotypes, set up AIS project governance that involves all stakeholders in their diversity.
  • Add EDI parameters and indicators to the AIS project during its implementation to make stakeholders accountable.
  • Adjust the training data used to achieve better representation and greater fairness.
  • Simplify the language so that everyone understands the decision-making process.
  • Evaluate the impact and results of the AIS not only upstream, but also continuously (see the sketch after this list).
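As an illustration of the last point, here is a minimal sketch of a recurring audit of logged AIS decisions, assuming the group membership of the people concerned is recorded; it computes the disparate impact ratio, with the common “four-fifths” rule of thumb used as a rough warning threshold. Names and data are hypothetical:

```python
def disparate_impact(decisions, protected_group, reference_group):
    """Ratio of positive-decision rates: protected vs. reference group.

    `decisions` is a list of (group, approved) pairs, e.g. the logged
    outputs of an AIS. A ratio well below 1.0 signals a disparity.
    """
    def rate(group):
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    ref = rate(reference_group)
    return rate(protected_group) / ref if ref else 0.0

# Hypothetical logged decisions: (group, decision).
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = disparate_impact(log, protected_group="B", reference_group="A")
if ratio < 0.8:  # "four-fifths" rule of thumb
    print(f"Alert: disparate impact ratio {ratio:.2f}; review the AIS")
```

Run regularly against fresh decision logs, a check like this turns the principle of continuous evaluation into a concrete alert rather than a one-time, pre-deployment verification.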

The panel of experts also highlighted ways in which AI can be part of the solution by identifying and correcting biases that already exist in systems. In short, one might be surprised that AI has not already pushed companies to modify their systems and implement EDI policies and programs.


Notes

[1] This article was written following an event organized jointly with Ronnie Aun (Valital Technologies), Alison Cohen (Mila), Shali-Fa Diop (Digno Solutions), Tania Saba (University of Montreal) and Behnaz Sabunchi (YY) in February 2022 by the International Observatory on the Social Impacts of AI and Digital Technology (OBVIA).

[2] Other examples include: University of Montreal, Montreal Declaration for Responsible Development of Artificial Intelligence, 2018.
