Fear not the black box: How responsible AI adoption can drive innovation and productivity

AI arms races. The Stargate Project. Billions of dollars of investment. Current headlines about artificial intelligence read like science fiction. But these rapid technological developments (along with significant amounts of hype) are very real, and can be anxiety-inducing for lawyers and in-house counsel trying to understand how AI — and particularly generative AI (GenAI) — will affect their clients, employees, and bottom line.

It is critical for organizations to keep an eye on the latest geopolitical developments arising from new AI tools. Nevertheless, for those just getting a handle on how artificial intelligence may impact their day-to-day operations, a more local perspective is needed.

AI adoption in Canada: Recent numbers

While AI development and deployment in Canada’s private sector remain in their early stages, Statistics Canada reports that over 6 per cent of Canadian businesses used AI to produce goods and services in 2024, with adoption concentrated in the information management, scientific, and finance industries. Use cases vary, but most AI tools are currently deployed for natural language processing, text analytics, and virtual agents or chatbots.

When it comes to GenAI specifically, the share of Canadian businesses adopting such tools more than doubles, to 14 per cent, according to the Canadian Chamber of Commerce. Not surprisingly, larger businesses are nearly twice as likely as small businesses to implement GenAI. (In a separate survey from Sept. 2024, BDC reported that 27 per cent of Canadian small business owners are using AI without even realizing it.)

As interest and engagement increase at an organizational level, business leaders are rightly becoming concerned that employees — particularly those who use GenAI for personal activities — may bring these tools into the workplace without proper policies, training, or guidance.

In Nov. 2024, KPMG reported that 46 per cent of Canadian workers are using GenAI in their jobs, an adoption increase of 116 per cent since 2023. While this entrepreneurial spirit is commendable, significant risks are coming to light. For example, KPMG’s report found that 24 per cent of users have uploaded proprietary company data (for instance, HR information or supply chain statistics) into public GenAI platforms. Another 19 per cent said they have entered private financial information about their company into these public-facing tools.

The benefits of a tech-savvy workforce can quickly be eclipsed by the legal and reputational risks of employees using AI tools without appropriate internal guidelines and training programs. The consequences are significant: they include privacy breaches resulting from data misuse, regulatory non-compliance, and operational inefficiencies.

Seven steps towards your own successful AI adoption

For in-house counsel and other lawyers, this is a critical moment. As governments struggle to regulate AI, establishing comprehensive internal policies becomes imperative for the success of business operations.

Helpfully, various frameworks exist against which organizations can benchmark their development and deployment of AI. For example, the ISO/IEC 42001 standard helps mitigate risks associated with AI, including bias, unintended outcomes, and privacy concerns. ISO/IEC 42001 also fosters global harmonization of AI practices, enabling cross-border collaboration and compliance with emerging regulations.

As geopolitical alignments shift in the fight for AI dominance, executives — with guidance from their legal teams — can take local, practical steps to reduce the day-to-day risk of these evolving technologies. They can also support staff in using these tools for the benefit of their clients (and their own bottom line):

  1. Strengthen board and C-suite AI literacy: Equip the board and C-suite with the knowledge and strategic insight needed to oversee AI initiatives effectively. This includes tailored briefings on AI risks, regulatory developments, and business opportunities, as well as ongoing access to subject-matter experts. Board members should understand not only the ethical and legal implications of AI, but also how it aligns with corporate strategy and risk management. Regular AI workshops and scenario-based training can help leaders make informed decisions and foster responsible AI adoption at the highest levels.
  2. Develop clearly written AI policies: This includes defining acceptable use cases for GenAI tools and establishing protocols for inputting data and using AI-generated outputs. It also involves identifying who can use AI, and under what circumstances. Guidelines on ethical AI usage, data security, and intellectual property are also critical, as is ensuring that AI usage aligns with corporate goals and ethical standards. (A minimal sketch of how one slice of such a policy might be enforced in software appears after this list.)
  3. Invest in employee training and awareness: Provide regular training on the risks and benefits of GenAI, and ensure employees understand the organization's AI policies. If you are not training people for the “age of artificial intelligence,” you are not providing the right training. Education and awareness serve an additional purpose: with a basic understanding of AI, employees can couple that knowledge with their deep subject-matter expertise to help organizations identify the most effective use cases and drive strong business outcomes. In-house counsel and legal resources can also foster interdisciplinary collaboration across business units to address AI challenges and opportunities holistically.
  4. Implement robust oversight mechanisms: Appoint an AI governance committee to oversee compliance and conduct regular audits throughout the AI lifecycle, ensuring adherence to policies and regulations. This is necessary to maintain transparency and accountability. The committee should also be agile, as GenAI decisions often need to be made quickly and require oversight of third-party vendors that may be involved in the AI’s development or deployment.
  5. Conduct comprehensive risk assessments: Identify, evaluate, and address risks associated with AI, such as bias, security vulnerabilities, and unintended consequences. Systems should also be established to monitor AI performance, accuracy, and compliance with the organization’s governing principles. In addition, similar assessments should cover all contractual terms and obligations arising from the development or deployment of AI.
  6. Be transparent about AI use: Communicate AI practices clearly to stakeholders to build trust and manage risks.
  7. Integrate AI-specific provisions into contracts and agreements: Ensure that all agreements, including vendor contracts, partnership agreements, and customer terms and conditions, contain AI-specific provisions. These should address issues such as liability for AI-generated decisions, compliance with regulatory frameworks, data ownership and protection, intellectual property rights, and risk allocation.
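
To make step 2 (and the audit trail contemplated in step 4) concrete, here is a minimal, hypothetical sketch of how a technical team might encode one slice of an acceptable-use policy in software: a pre-submission check that flags potentially sensitive text before it is sent to a public GenAI tool, plus a simple audit record for the governance committee. Every name, pattern, and file path below (SENSITIVE_PATTERNS, check_prompt, log_usage, ai_usage_audit.log) is an illustrative assumption, not a vetted data-loss-prevention solution.

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical patterns an organization might treat as sensitive.
# A production deployment would rely on vetted DLP tooling, not ad hoc regexes.
SENSITIVE_PATTERNS = {
    "canadian_sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marker": re.compile(r"(?i)\b(confidential|privileged|internal only)\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def log_usage(user: str, tool: str, flags: list[str]) -> None:
    """Append an audit record for the AI governance committee's review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "flags": flags,
        "allowed": not flags,
    }
    # Hypothetical log location; a real system would use centralized logging.
    with open("ai_usage_audit.log", "a") as log_file:
        log_file.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    prompt = "Summarize this CONFIDENTIAL supplier list for jane.doe@example.com"
    flags = check_prompt(prompt)
    log_usage(user="jdoe", tool="public-genai-chat", flags=flags)
    if flags:
        print(f"Blocked: prompt matched sensitive patterns {flags}.")
    else:
        print("Prompt passed the basic policy check.")
```

In practice, a check like this would sit behind an approved gateway or browser extension rather than on each desktop. The point is simply that a written policy becomes far easier to enforce, and to audit, once its key rules are expressed in a form that machines can apply consistently.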

Step into the future today

AI is here to stay. The organizations that thrive will be those that approach this transformative technology with foresight and responsibility. Solid AI governance fosters a culture of innovation, enabling individuals and teams to experiment, iterate, and bring transformative ideas to market at an unprecedented pace, and rewarding your leadership many times over.

If your organization has questions about implementing an AI governance framework, please reach out to the contacts below or any member from BLG’s Cybersecurity, Privacy & Data Protection Group.

Key Contacts