Integrating Fairness Visualization Tools in Machine Learning Education

AUTHORS

  • Afra Mashhadi, Computing and Software Systems, STEM, UW Bothell
  • Annuska Zolyomi, Computing and Software Systems, STEM, UW Bothell

ABSTRACT

As demonstrated by media attention and research, Artificial Intelligence systems do not adequately address issues of fairness and bias, and more education on these topics is needed in both industry and higher education. Computer science courses that cover AI fairness and bias currently either focus on statistical analysis or bring in philosophical perspectives that lack actionable takeaways for students. Building on long-standing pedagogical research demonstrating the importance of tools and visualizations for reinforcing student learning, we ask which visualization and interactive tools can serve as resources for students examining algorithmic fairness concepts. Through a qualitative review and observations of four focus groups, we examined six open-source fairness tools that take input data from the student and visualize the outcomes for various demographic groups, enabling students to quantify and explore algorithmic biases. Our study shows that, as measured by surveys and interviews, students found that interactive tools, and tools that allowed examining custom high-dimensional datasets (such as images), reinforced their learning. By letting students study counterfactual points, such tools helped bridge the gap between theoretical concepts of fairness and the observable consequences of biased decision making. The findings of this study provide insights into the benefits, challenges, and opportunities of integrating fairness tools into machine learning education.
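The tools examined in the study take student-supplied data and visualize outcomes per demographic group. As a minimal illustrative sketch of the kind of group-level comparison such tools surface (this example is hypothetical and not drawn from the paper or any specific tool), one common measure is demographic parity: comparing the rate of positive predictions across groups.

```python
# Hypothetical sketch of a group-fairness comparison (demographic parity):
# the fraction of positive predictions received by each demographic group.
# All data and names here are illustrative, not from the study.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

# Toy data: 1 = positive outcome (e.g. loan approved), 0 = negative.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")  # 0.75
rate_b = positive_rate(predictions, groups, "B")  # 0.25
parity_gap = rate_a - rate_b                      # 0.5: group A favored
```

Interactive fairness tools typically plot such per-group rates side by side, so a disparity like the 0.5 gap above becomes immediately visible rather than buried in aggregate accuracy.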

SUMMARY

RESEARCH QUESTION

How can we teach CS students about ethics and responsible AI?

RESEARCH METHODS / SCHOLARLY BASIS

Focus group.

RESULTS

See our full paper.

APPLICATION

See our full paper.