The conversation surrounding Artificial Intelligence (AI) has moved at breakneck speed since the end of 2022, with the release of ChatGPT and other generative AI applications. In a very short time, even for the fast-paced world of technology, the shift from accepting AI as inevitable to actively adopting it has been pronounced. The benefits of AI have become clear. So too have the concerns and risks.
Companies that have deployed, or are looking to deploy, AI technologies are quickly discovering that regulatory approaches to AI vary significantly across jurisdictions, industries and sectors, particularly within financial services, insurance, health care, retail and e-commerce. According to the Organisation for Economic Co-operation and Development (OECD), there are currently over 800 AI policy initiatives from 69 countries, including the European Union. Canada itself is inching towards its own regulation of AI, with the Artificial Intelligence and Data Act (AIDA) being introduced by the Canadian government as part of Bill C-27, the Digital Charter Implementation Act, 2022.
As previously discussed, the main purposes of the AIDA are to:
- regulate international and interprovincial trade and commerce in AI systems by establishing common requirements, applicable across Canada, for the design, development and use of AI systems; and
- prohibit conduct that may result in serious harm to individuals or their interests.
The pacing problem that always accompanies government attempts to regulate new technologies has become especially pronounced amid worries about the “existential risk” posed by AI. Even with this wide swath of regulatory focus, a degree of harmonization is beginning to develop, having regard to some core principles of AI, such as: (i) transparency and explainability; (ii) reliability; (iii) robustness, security and safety; (iv) fairness and avoidance of unfair bias; and (v) ethics and accountability.
Given the interconnected nature of modern information technology systems, there will be a strong motivation towards global harmonization, as is reflected in the AIDA. For your business in Canada, this means close attention must be paid to global legislative and regulatory developments, as these developments will place a significant burden on your compliance teams, who will in turn need to keep pace.
The emerging core principles of AI regulation
Consensus is emerging around the core principle that AI products and services must be trustworthy, and trustworthiness must be the focus of any compliance program. The OECD, which has been working to harmonize global AI regulatory approaches, has identified transparency and explainability as key components of effective AI regulation. Internationally, regulation is pushing AI creators to commit to transparency when it comes to the inputs and outputs of AI systems, and the training methods used to build them.
Explainability refers to providing the individuals who receive an AI system’s outputs with the information required to understand how the system arrived at a given output. This requires a business to provide clear and understandable explanations of the factors and logic behind the outcome of a specific AI system. Businesses that make transparency and explainability a priority in the development of their AI will find compliance a much easier task.
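To make the idea concrete, the following is a minimal sketch, in Python, of what attaching an explanation to an automated decision could look like. The decision rule, factor names, weights and threshold are hypothetical, invented purely for illustration; real systems would use explanation tooling appropriate to their models.

```python
# A minimal, hypothetical illustration of explainability: the decision
# function returns not just an outcome but the factors behind it, so the
# outcome can be explained to the affected individual in plain language.
# The factor names, weights and threshold are invented for illustration.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.3, "years_employed": 0.2}
THRESHOLD = 0.6

def decide_with_explanation(applicant: dict) -> dict:
    # Keep per-factor contributions so the "factors and logic" behind
    # the outcome can be disclosed alongside the decision itself.
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        # Factors ranked by how strongly they influenced the outcome.
        "explanation": sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True),
    }

print(decide_with_explanation(
    {"income": 0.8, "debt_ratio": 0.5, "years_employed": 0.4}))
```

The point of the design is that the explanation is produced with the decision, not reconstructed after the fact, which is what regulators reviewing transparency commitments will look for.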
It is inevitable that AI systems will experience errors. Reliability and robustness concerns about the operation of an AI system raise many questions: is it achieving its intended goal? Are verification and validation methods incorporated into the system? If the AI system is not working as intended, how will errors be addressed?
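Continuing the hypothetical decision function from the sketch above, the following illustrates one simple verification approach: replaying a fixed set of reference cases and flagging any drift from expected outputs before a system is redeployed. The cases and expected results are assumptions for illustration only.

```python
# Hypothetical validation harness: replay reference cases through the AI
# system and flag any output that no longer matches the expected result.
# Assumes decide_with_explanation from the previous sketch.

REFERENCE_CASES = [  # (inputs, expected approval) -- illustrative only
    ({"income": 1.0, "debt_ratio": 0.0, "years_employed": 1.0}, True),
    ({"income": 0.1, "debt_ratio": 0.9, "years_employed": 0.1}, False),
]

def validate(decide) -> list:
    """Return the reference cases where the system's output has drifted."""
    failures = []
    for inputs, expected in REFERENCE_CASES:
        actual = decide(inputs)["approved"]
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

failures = validate(decide_with_explanation)
if failures:
    print("Validation failed; triage before redeploying:", failures)
else:
    print("All reference cases passed.")
```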
The lifeblood of AI is data, and privacy is central to the development of AI systems. Businesses whose AI systems use consumer data, or even more sensitive personal information, to produce outputs need to conduct data privacy assessments before deploying AI in order to understand its impact. Data anonymization, data aggregation and other forms of data privacy protection need to be carefully considered and applied to instill trust in users and regulators alike. Canada, among other jurisdictions, has identified privacy and data protection as a critical aspect of AI regulation, making it a priority for any business looking to enter these markets.
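As a simple illustration of the protections being described, the following Python sketch pseudonymizes a direct identifier and aggregates individual-level values before the data reaches an AI pipeline. The field names and salting scheme are assumptions for illustration; note that pseudonymized data may still be personal information under privacy law, so a proper privacy assessment remains necessary.

```python
import hashlib
from collections import defaultdict

SALT = "rotate-me-per-dataset"  # hypothetical; manage real salts securely

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash before AI use."""
    out = dict(record)
    out["customer_id"] = hashlib.sha256(
        (SALT + record["customer_id"]).encode()).hexdigest()[:12]
    return out

def aggregate_by_region(records: list) -> dict:
    """Report spend by region so no individual-level value is exposed."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["monthly_spend"]
    return dict(totals)

records = [
    {"customer_id": "C-1001", "region": "ON", "monthly_spend": 120.0},
    {"customer_id": "C-1002", "region": "QC", "monthly_spend": 80.0},
]
print([pseudonymize(r) for r in records])
print(aggregate_by_region(records))
```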
Building out and monitoring AI systems to ensure that they are not unfairly biased and do not adversely impact human rights will be a prime focus of regulatory schemes.
Ethical and accountable AI is another tenet common amongst most regulatory approaches. Accountability flows from the need to develop ethical AI systems. The OECD’s definition of accountability in AI development is that of an ethical and moral expectation on businesses to ensure the proper functioning of the AI systems they design, develop, operate, or deploy, in accordance with their roles and applicable regulatory frameworks.
Even while awaiting passage of the AIDA, any Canadian organization designing, developing, deploying, or using AI systems should be developing its risk management framework and compliance programs to mitigate the various categories of risks (e.g., legal, ethical, business, reputational, etc.) that may arise from these activities.
Building your compliance program
To incorporate the key governing principles of AI, you must first identify and list the automated decision-making tools used by your business. A comprehensive AI risk-management program that includes a risk-classification system, risk-mitigation measures, independent and ongoing audits, data-risk-management processes and an AI governance structure cannot be built without such an inventory.
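One minimal way to structure such an inventory is sketched below in Python. The fields shown (owner, purpose, data sources, third-party flag, risk tier, audit date) are assumptions drawn from the program elements just listed, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One entry in an inventory of automated decision-making tools."""
    name: str
    business_owner: str                 # who is accountable for the system
    purpose: str                        # the decision or output it automates
    data_sources: List[str] = field(default_factory=list)
    third_party: bool = False           # built or operated by a vendor?
    risk_tier: str = "unclassified"     # e.g., low / medium / high
    last_audit: Optional[str] = None    # date of most recent independent audit

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        business_owner="HR Operations",
        purpose="Ranks incoming job applications",
        data_sources=["applicant resumes"],
        third_party=True,
        risk_tier="high",  # automated decisions about individuals
    ),
]
print(inventory[0])
```

Even a record this simple gives the audit, risk-classification and governance workstreams a common reference point for every system in scope.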
Some steps businesses can take to operationalize these key principles and to build out compliance programs include the following:
- Develop a clear and concise understanding of your business' use of AI. Achieving transparency and explainability requires the business to have a thorough understanding of how AI is deployed, its operational mechanisms and its impact on customers, suppliers, employees and the broader community associated with the business. Compliance officers must be able to explain how the AI systems work and how decisions are made.
- Understand the use of AI by your suppliers and third-party vendors and its impact on your customers. It is crucial to have a comprehensive understanding of how AI is utilized by your suppliers and third-party vendors, as well as its implications for your customers. This entails being well-informed about the software employed by these entities, including the underlying models and datasets used for third-party AI development and training.
- Establish a robust compliance program and team dedicated to AI. Assigning responsibility for AI compliance to a specific individual or team is essential for ensuring that AI initiatives within your organization meet regulatory requirements. This designation not only demonstrates a proactive approach but also signals a commitment to collaboration and responsiveness towards regulatory bodies. It facilitates the development of, and adherence to, a cohesive strategy, promoting effective governance and regulatory compliance in AI-related endeavours.
- Create risk frameworks and impact assessments. Determine the risk (and level of risk) associated with each AI system and process, and document how each risk should be addressed, mitigated or resolved; the consequences of misusing AI can be significant. To help address such risks, the National Institute of Standards and Technology (NIST) released the initial version of its AI Risk Management Framework in January 2023. It is crucial for companies to prioritize the safety and security of their AI systems and tools by following the guidelines provided by NIST or similar organizations (a simple illustration of a risk-classification step follows this list).
- Formulate AI policies and governance structures that embed the principles of transparency and explainability, data privacy protection, and the ethical and accountable handling of datasets within your organization.
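As flagged in the risk-frameworks bullet above, here is a hedged sketch of how a risk-classification step might be operationalized. The criteria and tiers are illustrative assumptions only; they are not the NIST AI Risk Management Framework itself, which should be consulted directly.

```python
# Hypothetical risk-tiering rules based on a system's potential impact on
# individuals. Derive real criteria from NIST's AI RMF or an equivalent.

def classify_risk(system: dict) -> str:
    affects_individuals = system.get("affects_individuals", False)
    uses_personal_info = system.get("uses_personal_info", False)
    human_in_the_loop = system.get("human_in_the_loop", True)

    if affects_individuals and not human_in_the_loop:
        return "high"    # fully automated decisions about people
    if affects_individuals or uses_personal_info:
        return "medium"  # people or their data involved, with oversight
    return "low"         # internal, non-personal use cases

example = {
    "name": "chat-support-triage",
    "affects_individuals": True,
    "uses_personal_info": True,
    "human_in_the_loop": True,
}
print(example["name"], "->", classify_risk(example))  # -> medium
```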
The basic advice is this: be proactive and prepared for AI regulation in Canada and in every jurisdiction where your business operates, and build out your compliance teams now, because the future is already here.