Throughout June 2023 we are showcasing interdisciplinary artificial intelligence (AI) research at the U of A that demonstrates how the university is leading with purpose to make AI safer, more reliable and more just.
Nidhi Hegde is an associate professor in the Department of Computing Science at the U of A and a fellow and Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute (Amii).
In this week’s spotlight, Nidhi speaks to how ensuring fair outcomes across groups of different race, ethnicity, gender and age in machine learning models contributes to the responsible development of AI.
What is AI?
People use the term AI to refer to algorithms that are designed for a certain task and learn from the data they are eventually designed to act on. AI is actually an umbrella term for many things, and what I just described is more like machine learning. AI includes machine learning, but also includes things like robotics and computer vision. Generally, automated adaptation to data, or learning about how things should function, is referred to as AI.
Briefly explain your field of research and how it involves AI.
My research focuses on privacy and fairness in machine learning. I'm interested in looking at whether a trained machine learning model is privacy preserving, in the sense that you are not able to infer private information about individuals that wasn't already in the data, and you are not able to infer that a certain person's data was involved in training the model.
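One standard tool for giving this kind of privacy guarantee is differential privacy, which adds calibrated noise to a computation so that any single person's presence in the data has a bounded effect on the output. The sketch below is purely illustrative and not a description of Hegde's specific methods: it implements the classic Laplace mechanism for a counting query, where the query's sensitivity is 1 and the noise scale is 1/epsilon.

```python
import math
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon):
    """Return a differentially private count of values matching predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the Laplace scale is 1/epsilon.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative data: ages of people in a dataset.
ages = [23, 41, 35, 58, 29, 62, 47]
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0)
```

An analyst sees only `noisy`, never the exact count of 4, so no single individual's record can be confidently inferred from the released statistic.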
Fairness refers to the outcomes of machine learning models, and whether these outcomes are similar across different subgroups in the population. We don't want to have a higher accuracy for a majority subgroup versus a minority subgroup along the lines of race, ethnicity, gender, age — a lot of what we consider sensitive features or demographic factors. In that sense, we don't want outputs of machine learning to be unfair. I'm interested in analyzing whether there are issues of unfairness in machine learning models, and more importantly, designing algorithms that are fair by design with respect to these subgroups.
How is AI affecting our lives and what is a common misconception people have?
AI is affecting every aspect of our lives in many different ways. It might be small things like Amazon recommending a book for you, which we've been used to for quite some time, but it could also be a new bus line that was put in because data was used to determine it was optimal. In healthcare, there might be diagnostic tools being developed that use AI. I think it would be hard to find an aspect of our lives that is not impacted by AI today.
A common misconception is the intention people ascribe to AI. A good example is the recent ChatGPT phenomenon, where people are surprised at how this tool generates text and equate that with artificial general intelligence. People are ascribing too much intelligence or power to AI, whereas it is trained to perform certain tasks.
With respect to my work, a misconception I often see is people thinking everything about AI is great. It's just making our lives easier or it's automating everything. Because of the lens through which I do my work, I see that there are other impacts of AI that we're not paying enough attention to. We should keep in mind that these AI tools or services are built for certain purposes. We're not quite at that stage where robots are going to lock the doors on us or something, but it's still something we need to keep an eye on and be aware of.
What is the long-term future of AI? And how is the U of A leading in this space?
I think the answer to that changes all the time — things that we would have thought were very far in the future a few months ago are suddenly here in the short term — so we need to be adaptive to whatever is happening in AI. We couldn't have known eight months ago that so many people would use ChatGPT and how it would impact a lot of lives in many different ways. We couldn't have imagined this multi-modal model that could generate text based on all of these different kinds of data, or the diffusion models that are creating images.
What I would like to see in the long term is responsible AI development that impacts us in a positive way, along with a concerted effort toward collaboration to make sure that AI is being developed uniformly across different types of institutes — so that the large companies able to afford the necessary resources aren't the ones doing all of the AI work.
In the long term, AI is going to be really integrated into many different parts of our lives, and I think the U of A is really well situated for that. Not only do we have a very strong computing science department that's looking at the essentials, the core algorithms of AI, but we also have a lot of other faculties and departments doing the engineering part, applying AI to different fields. This is going to see a lot of growth. For decades, people have been working on the essentials of getting an algorithm to work or scaling AI in a large way. Now we're at a point where we can implement this in many different aspects and I see that there are already a lot of different faculties at the university using AI in their fields. There is a lot of strong research happening at the university.
Over twenty years ago, the Government of Alberta invested in Amii, the U of A and the promise of AI here in Alberta. Since then, we've seen continued investment in growing our local ecosystem through groups like CIFAR and Alberta Innovates. This support has been instrumental in creating the conditions for industry, academia and government to come together to accelerate positive impact for society. It's an amazing time of opportunity for the field of AI.
What do people entering the workforce need to know about AI?
People will need to learn to be adaptive to how AI will change their field, because it's most likely they won't have the same job in 30 years. At a minimum, they need to be aware of the many different uses of AI — working with data, data science, machine learning, the methods that lead to AI — because it will impact their job. There will be a new method that automates things or makes things easier, so they will have to adapt to a new role that is uniquely human.
This conversation has been edited for brevity and clarity.
Nidhi is one of 26 faculty members and Amii fellows who are Canada CIFAR AI Chairs.
Through a new investment, the U of A and Amii will soon welcome 20 faculty members whose work reflects the technology’s transformational impact in health, energy, Indigenous leadership and more. Learn about how these new researchers will help shape the evolving landscape of AI.
About Nidhi
Nidhi Hegde is an associate professor in the Department of Computing Science at the University of Alberta and a fellow and Canada CIFAR AI Chair at Amii. Before joining the U of A in February 2020, she spent many years in industry research working on a range of interesting problems. Most recently, Nidhi was a research team lead at Borealis AI (a research institute for the Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applications for the bank. Before that, she worked at research labs such as Bell Labs, Technicolor and Orange.
Innovator Spotlight is a series that introduces you to a faculty or staff member whose discoveries, knowledge and ideas are driving innovation.
Do you know someone at the U of A who is transforming ideas into remarkable realities? Maybe it’s you! We are interested in hearing from people who are helping shape the future, improving quality of life, driving economic growth and diversification and serving the public. We feature people working across all disciplines, whether they are accelerating solutions in energy, shaping the evolving landscape of artificial intelligence or forging new paths in health and Indigenous leadership.
Get in touch at blog@ualberta.ca.