Project Summary: Artificial intelligence (AI) is increasingly being developed and implemented in healthcare. This raises privacy concerns, since many AI systems are privately owned and rely on public-private partnerships and data-sharing arrangements involving large quantities of patient health information. The Health Law Institute (HLI) investigated the Canadian legal and policy framework, focusing on two issues: first, the potential for inappropriate handling, use, or disclosure of personal health information by private AI companies; and second, the potential for privacy breaches that use newly developed AI methods to re-identify patient health information. The HLI team analyzed Canadian legislation, focusing on the federal Personal Information Protection and Electronic Documents Act, as well as applicable common law relating to torts and fiduciary obligations and key Canadian research ethics policy, namely the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. The team's key findings and recommendations are summarized below.
Findings and Recommendations:
- The scope of data made accessible to private AI companies should be based, first, on respect for patients' informed consent and their right of ongoing control over their personal health information and, second, on proportionality to the likelihood and significance of the potential benefits the AI can provide.
- Patients have a general right to informed consent for the use and disclosure of their personal health information, and an ongoing control interest that necessitates recontact for any new uses or disclosures. Public-private partnerships implementing healthcare AI should prioritize the ability to recontact patients.
- Patients have a general right to withdraw from participation in healthcare AI. AI companies will need to plan for the contingencies associated with removing a patient's data after it has been integrated into their systems.
- Altering regulation to place greater custodianship responsibility on domestic third parties to which patient health information is transferred would contribute to the safe future implementation of healthcare AI.
- Greater cooperation between provinces to make the regulation that applies to commercial AI companies more consistent could ease implementation and encourage compliance.
- Penalties levied against AI companies for breaches of privacy requirements should not, in our view, be fixed or capped in any way that could fail to deter malfeasance.
- The concept of "non-identifiable information" is increasingly dubious. The subset of health information that could plausibly meet this standard is shrinking rapidly. Regulators and policymakers must account for the reality that technical methods of breaching privacy through re-identification are improving quickly (a simple illustration follows this list).
- Access to patient data must be predicated on maintaining highly advanced data security, and anonymization where possible. Strong privacy protection will be required in light of advancing technology that allows data to be re-identified and misused. Data security measures should minimize risk during data transfer, storage, and deletion (see the encryption sketch following this list). Further, consent processes should disclose both any possible transfers of personal data to commercial entities and the realistic risk of a privacy breach.
- The issue of data security is shared between the institutions that grant AI companies access to patient data and the AI companies that manipulate and/or store that data. Responsibility for security must be shared, and security practices must be integrated extensively across both.
- Enforcement of very high standards for data protection will be key. Governments should consider creating interdisciplinary task forces focused specifically on creating, refining, and implementing technical standards for protecting patient health information.
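
To illustrate the re-identification concern noted above, the sketch below computes the k-anonymity of a small "de-identified" data release. The records, field names, and values are hypothetical assumptions chosen for illustration only, not drawn from the HLI analysis; the point is that even a handful of quasi-identifiers can single individuals out.

```python
# Illustration: a few quasi-identifiers can uniquely identify individuals
# even after names and health card numbers are removed.
from collections import Counter

# Hypothetical toy records: (postal_prefix, birth_year, sex)
records = [
    ("T6G", 1984, "F"),
    ("T6G", 1984, "F"),
    ("T6G", 1991, "M"),
    ("K1A", 1975, "F"),
    ("K1A", 1962, "M"),
]

def k_anonymity(rows):
    """Smallest group size sharing the same quasi-identifier combination.

    k = 1 means at least one person is unique in the release and can be
    re-identified by anyone who knows these attributes from an outside
    source (a voter roll, a social media profile, etc.).
    """
    return min(Counter(rows).values())

combos = Counter(records)
unique = [r for r, n in combos.items() if n == 1]
print(f"k-anonymity of this release: {k_anonymity(records)}")
print(f"{len(unique)} of {len(combos)} attribute combinations are unique")
```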
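On the data security recommendation, the following minimal sketch shows one standard safeguard, encryption of patient records at rest, using the Python cryptography package's Fernet interface. The record contents are hypothetical and the key handling is deliberately simplified; a production system would fetch keys from a managed key store rather than generating them in memory.

```python
# Illustration: encrypting a record at rest so a stolen file alone
# reveals nothing. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # simplified; in practice, use a key-management service
cipher = Fernet(key)

# Hypothetical record contents, for illustration only
record = b'{"patient_id": "hypothetical-123", "dx": "..."}'
token = cipher.encrypt(record)   # what actually gets written to disk or transferred
assert cipher.decrypt(token) == record

# Deletion: destroying the key renders every record encrypted under it
# unrecoverable ("crypto-shredding"), one way to honour withdrawal requests.
del key, cipher
```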