This article was originally published on August 29, 2024.
2024 is a pivotal year for artificial intelligence regulation around the world, with the enactment of the European Union’s Artificial Intelligence Act, which is already positioned as the gold standard, much as the General Data Protection Regulation was before it.
Canada showed leadership of its own in 2022 with the introduction of Bill C-27 and its proposed Artificial Intelligence and Data Act (AIDA). However, since its introduction, the bill’s legislative progress through the House of Commons has been particularly slow; the prospect of federal elections in just over a year now calls into question the bill’s very future.
Where does Québec stand on this issue?
The state of AI regulation in Québec
Despite the absence of a specific bill regulating AI at the provincial level, Québec has not been standing idle. In fact, Québec is a pioneer in the field of AI governance and ethics, having adopted the Montréal Declaration for a Responsible Development of Artificial Intelligence in 2018. The result of an important citizen co-construction process, the Montréal Declaration provides an ethical framework for the development and deployment of AI based on 10 principles: well-being, autonomy, privacy and intimacy, solidarity, democracy, equity, inclusion, prudence, responsibility and a sustainable environment.
More recently, in February 2024, the Conseil de l’innovation du Québec tabled a report listing 12 recommendations, including one urging the government to adopt framework legislation that would regulate the development and deployment of AI throughout society.
Statement of Principles for the Responsible Use of Artificial Intelligence by Public Bodies
The latest development in the regulation of AI in Québec comes from the Ministère de la Cybersécurité et du Numérique (MCN), which has adopted, under section 21 of the Act respecting the governance and management of information resources of public bodies and government enterprises, a Statement of Principles for the Responsible Use of Artificial Intelligence by Public Bodies (available in French only).
The 10 guiding principles established by the MCN to guide the use of AI by public bodies are:
- Respect for individuals and the rule of law: Responsible use of AI systems must respect the rule of law, individual rights and freedoms, the law and the values of Québec’s public administration.1 More specifically, public bodies must ensure that AI systems’ learning data and other data are lawfully collected, used, and disclosed, taking into account applicable privacy rights. For example, the Act respecting Access to documents held by public bodies and the Protection of personal information provides for the requirement to produce a Privacy Impact Assessment (PIA) for the acquisition or development of an AI solution that involves the collection, use and disclosure of personal information.
- Inclusion and equity: Responsible use of AI systems must aim to meet the needs of Québecers with regard to public services, while promoting diversity and inclusion. Any AI system must therefore minimize the risks and inconveniences for the population, and avoid causing a digital divide. Staff members of public bodies must be able to benefit from the necessary support through the introduction of mechanisms and tools, particularly when jobs stand to be transformed by technological advances.
- Reliability and robustness: Measures must be taken to verify the reliability and robustness of AI systems. Remedial and control measures must also be put in place to ensure that these systems operate in a stable and consistent manner, even in the presence of new disturbances or scenarios. Data quality is a key element in addressing the reliability and robustness of an AI system; namely, ensuring that the data is accurate and free of bias that can pose risks, cause harm, or reinforce various forms of discrimination.
- Security: Responsible use of AI systems must comply with information security obligations. Security measures must be put in place to limit the risks involved and adequately protect the information concerned.
- Efficiency, effectiveness and relevance: Responsible use of AI systems should enable citizens and businesses to benefit from simplified, integrated and high-quality public services. The use of such systems should also aim at optimal management of information resources and public services. For example, an organization can demonstrate its adherence to this principle with an opportunity case that shows how AI is critical to solving a problem or improving a process.
- Sustainability: Responsible use of AI systems must be part of a sustainable development approach. For example, an organization can demonstrate its adherence to the principle by conducting an assessment of the environmental impacts of its AI project.
- Transparency: Public bodies should clearly inform citizens and businesses about the nature and scope of AI systems, and disclose when they are used, so as to promote public trust in these tools. For example, an organization can demonstrate its adherence to the principle by providing signage to indicate to users that the service they receive is generated by an AI system.
- Explainability: Responsible use of AI systems means providing citizens and businesses with a clear and unambiguous explanation of decisions, predictions, or actions concerning them. The explanation should provide an understanding of the interactions and their implications for a decision or outcome.
- Responsibility: The use of AI systems entails responsibility, including responsibility for their proper functioning. Putting in place control measures and adequate governance, including human oversight or validation, will contribute to this.
- Competence: Public body employees must be made aware of the standards of use, best practices, and issues that may arise throughout the life cycle of AI systems in the performance of their duties, in addition to fostering the development of their digital skills. Teams dedicated to the design and development of AI solutions must build cutting-edge expertise to enable the delivery of simplified, integrated, and high-quality services by the public administration. For example, an organization can demonstrate its adherence to the principle by providing training on AI-use best practices to its staff prior to deployment.
In addition, the MCN specifies that these principles apply even when a public body uses service providers or partners to develop or deploy an AI system; each organization is therefore responsible for ensuring that its suppliers and partners adhere to these principles at all stages of a project involving the integration of AI.
The Best Practices Guide to Generative AI Use
In October 2024, the MCN published the Best Practices Guide to Generative AI Use (available in French only) as a companion to its Statement of Principles. Designed to provide public bodies with a governance framework on how to use widely available external generative AI tools, this Guide expands on the Statement of Principles with practical guidance on responsibly operationalizing these emerging technologies. The Guide offers recommendations for both organizations and individual employees with the goal of supporting public bodies as they develop their AI governance.
Public bodies should focus on integrating the following complementary principles into how they use external generative AI tools.
Protection
Public bodies and their employees are responsible for protecting the information they hold. They must safeguard sensitive or strategic information and protect personal information in accordance with applicable privacy rights.
Best practices: Protection
Responsibility and impartiality
Public bodies are accountable for the information they produce and share. They must therefore ensure that any AI-generated content is factual, legal and ethical. It must also be coherent, accurate, free from bias, relevant to the public body’s mandate, and used fairly.
Best practices: Responsibility and impartiality
Use
Public bodies must ensure that generative AI tools are used effectively and appropriately to address the needs of the organization or the public.
Best practices: Use
Due diligence and accountability
Public bodies need to diligently and proactively manage risks and incidents related to generative AI. They must determine whether to track and document the use of these tools to foster transparency in how and why actions are taken and decisions are made.
Best practices: Due diligence and accountability
Awareness, change management and well-being
Public bodies need to inform and educate people about how generative AI tools work, their strengths and limitations, and how to use them responsibly. They need to ensure that support is available to employees in the event that generative AI negatively affects their health or well-being, depending on the resources available within the organization.
Best practices: Awareness, change management and well-being
The MCN emphasizes that each public body must tailor these principles to its own characteristics. Public bodies can also leverage their internal security, access to information and privacy processes to regulate the use of external generative AI tools. In all cases, precautions should be proportionate to how AI is being used, and appointing a person responsible for overseeing and managing these tools is a sound strategic choice.
Towards a harmonized framework for AI?
It is interesting to note that the principles outlined by the MCN are very similar to those identified by the federal government in its companion document to AIDA.
AIDA’s risk-based approach is precisely designed to align with evolving international standards in the field of AI, including the European Union’s AI Act, the Organisation for Economic Co-operation and Development’s (OECD) AI Principles, and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework in the United States. The MCN also uses the OECD definition of “artificial intelligence system.”
Given the obstacles faced by federal Bill C-27, it is encouraging to see the Québec government drawing on this same approach to regulation, in line with international standards on AI. As more and more public bodies explore AI opportunities to improve their operations and the delivery of public services, the MCN Statement of Principles provides clear guidance that can be applied to all sectors of public administration, regardless of the nature of the activities or data involved.
In this context, the Ministère de l’Éducation recently published the Guide for the Pedagogical, Ethical and Legal Use of Generative Artificial Intelligence for Teachers (available in French only). This Guide, directly inspired by the MCN’s Statement of Principles, suggests principles for the use of generative AI tailored to the education sector. This initiative could serve as a first step towards the development of sector-specific principles for other strategic fields, such as health or justice, thus reinforcing ethical and coherent governance of AI across all of Québec’s public services.
Finally, to operationalize these principles, public bodies can consider establishing an AI governance framework to strengthen their resilience as they integrate these technologies.
Contact us
BLG’s Cybersecurity, Privacy & Data Protection group closely monitors legal developments that can inform organizations about data protection requirements in Canada. If your organization has questions about implementing an AI governance framework, please reach out to the contacts below or any other group members.
1 The MCN refers to the Déclaration de valeurs de l’administration publique québécoise (available in French only).