Inclusiveness in model development is crucial to achieving fair AI.

Issues:

  1. Average model errors hide minority groups: Reporting only a model's average accuracy often hides errors that are concentrated in minority groups.
  2. Amplification of bias through iterated training cycles: When AI models are trained on biased data, the decisions those models make produce further biased data; the iterative train-and-predict process then becomes a vicious circle of bias.
  3. Bias amplification through the model's lifecycle (passage of time): A lack of inclusion at different stages of a model's life cycle causes it to develop bias over time.
  4. Unpredictable consequences due to a homogeneous developer community: Biases creep in at different stages of the development process when the developer community lacks the diversity of the user base.
  5. Non-inclusive mechanisms for feedback on AI models (potential problem: no option to provide anonymous feedback): Not all groups can effectively use existing channels for reporting feedback on model quality.
  6. Biased data sets: Bias in training data leads to biased AI models.
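Issue 1 above can be made concrete with a minimal sketch (all numbers are hypothetical): a model that fails badly on a small group can still report a reassuring average accuracy.

```python
# Hypothetical test-set numbers: group A is 95% of the data, group B is 5%.
n_a, correct_a = 950, 941   # the model is ~99% accurate on group A
n_b, correct_b = 50, 20     # but only 40% accurate on group B

average_accuracy = (correct_a + correct_b) / (n_a + n_b)
group_b_accuracy = correct_b / n_b

print(f"average accuracy: {average_accuracy:.1%}")   # 96.1% -- looks fine
print(f"group B accuracy: {group_b_accuracy:.1%}")   # 40.0% -- hidden failure
```

The headline 96.1% figure gives no hint that the model is wrong on most group B cases, which is exactly the masking effect described above.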

Solutions:

  1. Average model errors hide minority groups: Report, and act on, stratified (per-group) errors wherever possible (for example, build new models or include group identity).
  2. Amplification of bias through iterated training cycles: Periodically retrain and revalidate AI models.
  3. Bias amplification through the model's lifecycle (passage of time): Re-evaluate inclusion goals and metrics at each stage of the model's life cycle.
  4. Unpredictable consequences due to a homogeneous developer community: Provide interdisciplinary training (e.g. in the humanities) to developers.
  5. Non-inclusive mechanisms for feedback on AI models: Adapt feedback systems to cater to diverse groups, and offer the option of anonymous feedback.
  6. Biased data sets: Carefully build inclusive data sets, and put quality-check processes in place.
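Solution 1 can be sketched as a small helper that stratifies errors by group instead of averaging over the whole test set. The function name, data layout, and group labels below are illustrative assumptions, not an established API:

```python
from collections import defaultdict

def stratified_accuracy(records):
    """Per-group accuracy from (group, was_correct) pairs.

    Illustrative helper: 'group' could be any protected or otherwise
    relevant attribute recorded alongside each model prediction.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, was_correct in records:
        counts[group][0] += int(was_correct)
        counts[group][1] += 1
    return {g: correct / total for g, (correct, total) in counts.items()}

# Toy data: a high overall accuracy hides group "B"'s much higher error rate.
records = ([("A", True)] * 941 + [("A", False)] * 9
           + [("B", True)] * 20 + [("B", False)] * 30)
print(stratified_accuracy(records))
```

Reporting the per-group figures alongside the overall average makes a regression on any single group visible, so it can be acted on rather than averaged away.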


Responsibility: Model Developers