The film Coded Bias makes an excellent contribution to a dialogue that is far too limited in education. My comments and perspectives are based on my career as a K-12 teacher and my current work in post-secondary teacher education, with an interest in transforming teaching and learning. I would like to thank Shalini Kantayya and everyone involved in the filmmaking for provoking this much-needed dialogue to help guide the way forward as machine learning and artificial intelligence (AI) continue to evolve and impact all aspects of society. The film reminds us that societal biases can be encoded in algorithms unknowingly or unintentionally, leading to algorithmic bias, a problem that may not be easily detected. Algorithms are increasingly used to make important decisions that affect people’s lives. As shown in the film, it is possible for an algorithm to produce an invalid assessment of an exemplary teacher, with consequences for employment, retention or tenure. Similarly, invalid assessments of students can affect admissions, program advancement, grading and decisions related to their academic conduct. What are the imperatives for education? For educators, for schools, for curricula? I would like to discuss three imperatives (I’m sure there are many more):
First, biases need to be critically examined. I often refer to the double-edged sword of innovation. With AI, for example, there can be extraordinary opportunities for improvement, such as increased efficiency, but there can also be significant consequences, such as the invasive surveillance shown in the film. Technology can be helpful, and at the same time it can cause undue harm. AI can be developed for seemingly good purposes and with the intent to be harmless, not harmful. However, there can be insufficient attention to the biases in designs. In teaching we refer to teachers as designers of learning and recognize that each teacher has bias, each curriculum designer has bias, each curriculum has bias. The film demonstrates why it is important for designers in any field to analyze bias in their designs. Bias in designs needs to be critically analyzed and questioned from multiple perspectives; bias needs to be discovered and uncovered at the very early stages of the design process. Too often, designers move from prototype to testing, or from draft curricula to pilot phases in education, without critically examining and limiting the biases.
A second imperative is to raise the expectations and standards for ethics in designs.
In education we need transparency and accountability for algorithms that have the potential to impact the overall advancement of individuals. There needs to be full disclosure of these algorithms, and there need to be regulations governing their use. We need to question the ethics and raise the standards when AI serves as the first step and first stop in making important decisions with human impact. False positives can have significant negative human consequences.
A third imperative is to take responsibility and assume a role in protecting integrity. We all have a role and responsibility to protect the integrity of a meaningful world. In my role as an educator and scholar in education, and as academic coordinator for a graduate program called Leading and Learning in a Digital Age, I aim to design courses, conduct research, and continually interrogate and critically examine the implications of innovations in education. We need to advocate for, look for and consider plausible consequences when designing learning, or when testing or piloting any new inventions and innovations. As a society, how might we take action? How might we advance high standards for the technologies we use with learners, the technologies we develop for learning, the learning designs and the curricula used?
Three key imperatives resonated with me from an educational perspective as I viewed the film: the need to critically examine biases; the need to raise the expectations and standards for ethics in designs; and the need for all of us to take responsibility and assume a role in protecting the integrity of a meaningful world.
You may find the following related links interesting (shared by Dr. Lisa Silver, Faculty of Law, University of Calgary):
Federal Digital Charter: https://www.ic.gc.ca/eic/site/062.nsf/eng/h_00108.html
Law Commission of Ontario, The Rise and Fall of AI and Algorithms In American Criminal Justice: Lessons for Canada, (Toronto: October 2020)
Lisa Silver and Gideon Christian, “Harnessing the Power of AI Technology; A Commentary on the Law Commission of Ontario Report on AI and the Criminal Justice System” (November 18, 2020), online: ABlawg, http://ablawg.ca/wp-content/uploads/2020/11/Blog_LS_GC_LCO_Report.pdf (commenting on the LCO Report)
Recent privacy review of Clearview AI: Joint investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information Privacy Commissioner of Alberta:
Ewert v. Canada, 2018 SCC 30 (CanLII), [2018] 2 SCR 165: https://canlii.ca/t/hshjz (bias in risk assessment tools)
Multiple Reports on the issue from AI NOW Institute: https://ainowinstitute.org/reports.html.
LAWNOW Magazine – Special report on Privacy: https://canlii.ca/t/sjpm
An accessible perspective: McSweeney’s Issue 54: The End of Trust (2018)