Inkpots and chatbots

The future of law: A Q&A with Lisa Chamandy, Chief Artificial Intelligence Officer (CAIO)

With generative artificial intelligence, or gen AI, set to reshape how work is done across industries, will the lawyers of tomorrow entrust more and more of their practice to a machine? Or will the technology enable lawyers to double down on the human skills behind strategic, practical legal advice? Fresh on the heels of becoming the first appointed CAIO at a Canadian law firm, BLG’s Lisa Chamandy shares her views on how gen AI will upend the legal industry—and how it will not.

BLG: Gen AI is on every business agenda worldwide, including in Canada. What’s your perspective on current adoption practices across industries?

Lisa Chamandy: Gen AI has gotten a lot of attention really fast, that’s true, and part of that comes from the shock factor of conversing with machines in a way that used to be exclusive to human interactions. I say something, and you say something back, and I respond to that, and so on: it’s not extraordinary until it’s happening on your screen, and then it is extraordinary, at least for a little while. Until something else comes along, that is, like an ability to “speak” or “see,” for example, and then the initial chatbot features seem more ordinary, already.

But in terms of how gen AI is embedded in day-to-day work, for the moment it is not as extensive as the headlines might have you believe. The tools perform specific tasks, not entire roles; nor are they about to. So we are talking about point solutions that address narrow use cases, not new computerized colleagues by any stretch.

This means that companies looking to leverage gen AI may want to focus on pain points—“What problems are we trying to solve?”—and on doing new things—“What opportunities are we trying to seize?” That is where they may find business advantage, whether through augmentation or through acceleration.

BLG: How does this compare to the usual technology adoption curve?

Lisa Chamandy: It’s different from anything I have ever seen before. We are reacting not only to what exists now, but also to the speed, trajectory and hype of a phenomenon. Companies need not get swept away by all that. They need to remember that gen AI, like all tech adoption, is simply a new tool to help us do what we do.

An important difference is that with classic tech, we followed a more linear path: we identified a tool with useful functionality, made sure it was inherently safe, then encouraged people to use it. 

With gen AI tools, security becomes a different ballgame. They’re inherently riskier. Some of the risks come from the tools themselves. Where is the data stored? Will it be used to train the models? How authoritative is the training dataset? Does the tool hallucinate? 

But just as important are the risks that come from how people interact with the tools. Are they used within an employee’s area of expertise? In the right circumstances? Are they being properly prompted? Are the tools well supervised? Are they understood and explained to relevant stakeholders? 

So we need to be more nuanced and thoughtful about the question of adoption. Sometimes, more is more. Sometimes, it’s just not. A responsible adoption strategy must account for that.

BLG: What would you recommend to a company that hasn’t yet approached gen AI and doesn’t know where to start?

Lisa Chamandy: A few things may be useful. Gen AI is not some far-off abstraction, nor is it something that can be ignored for very long. In fact, this is a great time for Canadian companies to look into it.

First, stage setting is important. Some strategic questions need to be answered in this regard. Who is responsible for gen AI? How can it support your overall business strategy? What policies do you need to have in place? How are you going to educate your teams—both in how they will use it and in how they can contribute to the dialogue about how your company should use it? How will you get your data in order, making it reliable, up to date, complete, etc., from now on? What is your competitive aspiration?

Next, I’d task someone with monitoring developments—in the technology itself, in the legal framework, in the known uses within your specific industry. This includes monitoring the evolving expectations of your own clients, partners, vendors—keeping in mind everyone else is also on their own gen AI trajectory. Workshops are helpful, ensuring a diversity of perspectives. Get some good discussion going with candid, critical thinkers in your company, to consider the potential impact of gen AI on what your company does, how you do it, for whom you do it, how you price for it, and what the impact may be on individual teams, workflows, and your culture. 

Once those discussions are kicked off, it’s time to start experimenting with small pilot projects, with tools you think will help solve problems, seize opportunities, and align with your strategy and aspirations. Participants in pilot projects may be given parameters, restrictions and guidelines about how the tools can be used while still in pilot, and about what happens once they are rolled out.

All along, make sure to mitigate the risks. Mitigating the risks means understanding the evolving legal landscape and best practices within your own industry. It also means addressing risks in relation to the tools themselves, reviewing them for functionality and security, and ensuring you have the right contractual mitigants in place.

BLG: As the first Chief Artificial Intelligence Officer at a Canadian law firm, what is your top priority for BLG?

Lisa Chamandy: My priority is to embed gen AI into our core business responsibly and purposefully, while considering the impact on our clients and firm in a holistic manner. This isn’t about gen AI being cool, new, shiny, or off to the side: it’s about using the right tools precisely as tools, in the right contexts, to the right extent, and optimally, to augment what we do, which is to serve clients, to practise law.

BLG: How do you think gen AI will generate value for clients, as far as the legal industry is concerned? 

Lisa Chamandy: In the shorter term, the value will come largely from the dialogue itself. Clients are navigating an increasingly AI-enabled economy; this presents novel considerations for their businesses as a whole. Clients can look to their own use cases—be they internal, in relation to their own goods and services, in relation to their stakeholders, or in terms of how they do business with third-party partners. The specifics may differ from one industry to the next, from one company to the next, even from one team to the next within the same company, but the core questions, like how this can best support our core business and how we can avoid the pitfalls, are probably universal at this point, and there’s value in the dialogue about them.

BLG: How would you describe the lawyer of tomorrow? What skills and training do they need to be successful?

Lisa Chamandy: In the longer term, as gen AI is embedded in daily practice, I think that great lawyers will be doubling down on the uniquely human elements that already make them great: creativity, critical thinking, judgment, being able to find practical solutions to business problems and to deliver strategic advice. In short, the things that client relationships are built on. 

With that comes how we train lawyers, and this will likely evolve to take into account what the tools might provide as outputs. Lawyers need to supervise the tools and obviously remain responsible for their work; this is in a context where a tool’s output may be convincingly written, well structured, etc., but also just wrong. The lawyer of tomorrow will sometimes need to spot that without the benefit of years of experience.

Also, the law itself is evolving with novel legal questions that AI entails and that cut across different substantive areas of law, maybe some substantive technology areas, too. It is fascinating to see legal frameworks continue to evolve, just as it will be to see laws and rules get interpreted accordingly by decision makers.

BLG: Our firm recently turned 200. How do you picture Canadian law in the future?

Lisa Chamandy: I think there will be a blurring of all sorts of distinctions: between a legal issue and a business issue, between a good and a service, as well as between different substantive areas of law. This is already true in some ways, but I think it will continue to progress. In terms of work product, we may wonder what’s what. Who or what generated that specific piece of it? It may not always be super clear, and I don’t think it’s necessarily going to matter so much. People remain responsible for their work, for their human qualities and their human connections, and I think that in time, the contribution of gen AI to the practice will sort itself out. The law will adjust, like it always has.


BLG's Future of Law series captures the perspectives of industry leaders on the biggest issues facing law and business over the next decade and beyond with the goal of starting conversations and supporting action in organizations across Canada. The year-long series was created in honour of BLG’s 200th anniversary in 2023-2024.