Can good policy save us from the potentially deleterious effects of AI? It’s time to look deeply into this question, now that AI is weaving its way into all of our lives. Ashley Casovan, ’07 BA, deeply understands AI policy development in her role as managing director of the New Hampshire-based AI Governance Center at IAPP, a global professional association for privacy, AI governance and digital responsibility. Casovan, a Montreal resident, is also an advisor to and former executive director of the Responsible AI Institute, a nonprofit dedicated to mitigating the potential harms of AI. The institute aims to equip organizations with tools for AI governance and compliance that align with emerging AI regulations and ethical standards. She’s also an adjunct professor of public policy and administration at Carleton University and a board member of the Vector Institute, which aims to advance safety in AI.
Named to the 100 Brilliant Women in AI Ethics list for 2024, Casovan traces her status as a leader in policies towards safer artificial intelligence to her background in arts, specifically political science. She first fell for policy in a course she took from professor emeritus Ian Urquhart at the U of A. “His practical approach to the implications of policy on society just opened my eyes to the importance of getting policy right,” she says. “That definitely not only shaped who I was, but changed the course of how I approach my work.” She shared a few ideas with us recently.
1: Just providing data isn’t enough
In 2008, Casovan learned about the power and threats posed by data while she was working as a community organizer for the New Hampshire Democratic Co-ordinated Campaign on behalf of then-presidential candidate Barack Obama. “I became really interested in the use of data and how it’s leveraged through technology: how that can be a benefit, but also how it can infringe on people’s privacy, or lead people to make the wrong decisions when the right policies and processes aren’t in place to support it,” she says.
Casovan later led the development of Open.Canada.ca, the federal government’s effort to provide transparency in the data it collects. “I saw that the objective of releasing data wasn’t enough, that you needed to have the policies to support that data being used in a safe and responsible manner,” Casovan says. “So I became the director of data and digital, where I worked on AI policy for the government’s use of AI systems.”
2: Artificial intelligence has power for good
AI refers to the ability of a computer to undertake tasks usually associated with human intelligence. Using a collection of technologies, computer systems are trained to learn from data, adapt to new information and make decisions, with abilities similar to — or sometimes exceeding — human cognition.
Casovan says AI can track trains to prevent derailments, speed research to find ways to diagnose and treat rare diseases, forecast which jobs will be needed in the future and predict the education, housing and immigration needed to support the people who hold those jobs.
3: AI also has the potential for harm
Casovan has worked throughout her career to help develop policy guardrails. Without these guardrails, AI systems can be created in ways that cause harm. For example, training data might build gender or racial bias into a system or allow for improper tracking of people through facial recognition.
“If there’s not a human actually doing the investigation and it’s just relying on a machine — that’s where there are a lot of concerns,” Casovan says. She stresses the importance of the work her organization does to train people who manage these systems to do so in a safe way, saying that it aligns “with the important work of higher educational institutions like the U of A.”
4: It’s possible to mitigate AI harm
Casovan’s recent work has focused on employment, including the use of artificial intelligence to filter job applications, which can streamline application review but can also exclude people based on race or gender. AI is also being used in interviews with the intent of detecting a person’s emotions.
“We all have different personalities, we all have different ways of being able to emote and convey those emotions,” Casovan says. “To have a machine predict that for each individual is something that is scary and has not been found to be accurate.”
Her work focuses on understanding how these systems are being used, then identifying the harms or risks that can come from that use. “I definitely believe that we should be using these systems, just doing so in a way that’s mitigating risk,” she says.
5: AI’s power is daunting, but working with it is satisfying
“It can be overwhelming, because AI is in the news all the time now and so we’re keenly aware of all of the problems,” she says. “But I get satisfaction from being able to work on something that’s impacting people, and that people really care about.”
The pace of AI development creates challenges, but Casovan sees progress with businesses and organizations as they prepare themselves and their teams to create trustworthy policies around AI governance, making responsible use of AI part of their regular operations.