With generative artificial intelligence, or gen AI, set to reshape how work is done across industries, will the lawyers of tomorrow entrust more and more of their practice to a machine? Or will the technology enable lawyers to double down on the human skills behind strategic, practical legal advice? Fresh on the heels of becoming the first appointed CAIO at a Canadian law firm, BLG’s Lisa Chamandy shares her views on how gen AI will upend the legal industry—and how it will not.
BLG: Gen AI is on every business agenda worldwide, including in Canada. What’s your perspective on current adoption practices across industries?
Lisa Chamandy: Gen AI has gotten a lot of attention really fast, that’s true, and part of that comes from the shock factor of conversing with machines in a way that used to be exclusive to human interactions. I say something, and you say something back, and I respond to that, and so on: it’s not extraordinary until it’s happening on your screen, and then it is extraordinary, at least for a little while. Until something else comes along, that is, like an ability to “speak” or “see,” for example, and then the initial chatbot features already seem ordinary.
But in terms of how gen AI is embedded in day-to-day work, for the moment it is not as extensive as the headlines might have you believe. The tools perform specific tasks, not entire roles; nor are they about to. So we are talking about point solutions that address narrow use cases, not new computerized colleagues by any stretch.
This means that companies looking to leverage gen AI may want to focus on pain points—“What problems are we trying to solve?”—and on doing new things—“What opportunities are we trying to seize?” They may find business advantage that way, potentially through augmentation or acceleration.
BLG: How does this compare to the usual technology adoption curve?
Lisa Chamandy: It’s different from anything I have ever seen before. We are reacting not only to what exists now, but also to the speed, trajectory and hype of a phenomenon. Companies need not get swept away by all that. They need to remember that gen AI, like all tech adoption, is simply a new tool to help us do what we do.
An important difference is that with classic tech, we followed a more linear path: we identified a tool with useful functionality, made sure it was inherently safe, then encouraged people to use it.
With gen AI tools, security becomes a different ballgame. They’re inherently riskier. Some of the risks come from the tools themselves. Where is the data stored? Will it be used to train the models? How authoritative is the training dataset? Does the tool hallucinate?
But just as important are the risks that come from how people interact with the tools. Are they used within an employee’s area of expertise? In the right circumstances? Are they being properly prompted? Are the tools well supervised? Are they understood and explained to relevant stakeholders?
So we need to be more nuanced and thoughtful about the question of adoption. Sometimes, more is more. Sometimes, it’s just not. A responsible adoption strategy must account for that.
BLG: What would you recommend to a company that hasn’t yet approached gen AI and doesn’t know where to start?
Lisa Chamandy: A few things may be useful. Gen AI is not “out there,” nor is it something that can be ignored for very long. In fact, this is a great time for Canadian companies to look into it.
First, stage setting is important. Some strategic questions need to be answered in this regard. Who is responsible for gen AI? How can it support your overall business strategy? What policies do you need to have in place? How are you going to educate your teams—both on how they will use it and on how they can contribute to the dialogue about how your company should use it? How will you get your data in order, making it reliable, up to date, complete, etc., from now on? What is your competitive aspiration?
Next, I’d task someone with monitoring developments—in the technology itself, in the legal framework, in the known uses within your specific industry. This includes monitoring the evolving expectations of your own clients, partners, vendors—keeping in mind everyone else is also on their own gen AI trajectory. Workshops are helpful, ensuring a diversity of perspectives. Get some good discussion going with candid, critical thinkers in your company, to consider the potential impact of gen AI on what your company does, how you do it, for whom you do it, how you price for it, and what the impact may be on individual teams, workflows, and your culture.
Once those discussions are kicked off, it’s time to start experimenting with small pilot projects, with tools you think will help solve problems, seize opportunities, and align with your strategy and aspirations. Participants in pilot projects may be given parameters, restrictions and guidelines about the uses that can be made of the tools while still in pilot, and then what happens once they are rolled out.
All along, make sure to mitigate the risks. That means understanding the evolving legal landscape and best practices within your own industry. It also means addressing the risks of the tools themselves: reviewing them for functionality and security, and ensuring you have the right contractual mitigants in place.