On September 20, 2019 I had the pleasure of attending a roundtable session on legal artificial intelligence held by the International Legal Technology Association (ILTA) at the offices of Blake, Cassels & Graydon in Toronto. The panel was made up of Dera Nevin, Adjunct Professor at the University of Toronto Faculty of Law; Ivo Nikolov, Director IT at Davies Ward Phillips & Vineberg; and Nikki Shaver, Director, Innovation and Knowledge at Paul Hastings. The session covered issues with artificial intelligence platforms developed for the legal industry, what has been learned so far in this area of technology, and the newly released Legal AI Efficacy Report, an analysis of 48 AI-powered legal technology tools.

The discussion focused on practical tips for approaching legal AI within your firm: everything from developing AI strategies (don’t put the cart before the horse; the business problem needs to come first), to planning and organizing a legal AI project (know your data and run projects in parallel), to watching for pitfalls (you need strong information governance and supervision), to the humbling conclusion that our AI is not the big ugly monster (or robot?) that Stanley Kubrick imagined in 2001: A Space Odyssey. Our AI is “weak” AI, and it is in no way stealing anyone’s job, at least for now. It is a tool, used to “augment” and “enhance” and generally make more efficient the routine tasks that bog legal staff down daily, so that they have time to focus on more complex and important legal work. AI also happens to be a buzzword; as Ivo Nikolov pointed out, 20 years ago we would simply have been discussing “software” rather than “AI-powered software”.

The biggest takeaway, something those of us currently working with legal technology probably already know, is that the software generally does not work “out of the box”. It requires extensive training and supervision; you cannot snap your fingers and get immediate, perfect results. There is still a lot of knowledge work to be invested, not to mention adapting the technology for Canadian law, since the platforms are so often developed and trained elsewhere. So, if you are considering embarking on a legal AI project, or talking to vendors about their sparkly new legal AI software, remember:

  1. Any AI product will take approximately 30–60 “examples” to be properly trained
  2. Training will always take double the time you expect, so think about resources
  3. The training will have to be maintained on an ongoing basis – again, resources
  4. Assemble a team of patient, tech-savvy, linguistically oriented people
  5. Normalize your data structure, naming conventions, date formats, etc.
  6. Qualitative bias inherent in your data will influence the training, accelerating and amplifying trends you may or may not want – so make sure the data is consistent and neutral
  7. Always set requirements and metrics in advance to measure success; otherwise, how do you know it’s working?
  8. Do not rerun old projects to “test” the AI; you could run into ethical issues if it finds something new
  9. Lawyers need to trust the technology, and building that trust will require a deep understanding of how it operates
  10. It’s not just about the performance of the product, it’s about the performance of humans

Lastly, when considering buying legal AI technology, ask the hard questions. The vendors will give you their pitch and present a pretty picture of the best-case scenario in the demo, but it’s your job to make sure the product will solve your particular business problem. Ask about security: look at their website for white papers. Ask about the data: where does it reside, and who sees it? Ask about the algorithm: who trained it, and where? Ask about the technology: what does it use? Is it the right tool for the job? And finally, remember that all the usual rules of buying technology apply.

-Cecilia Rose, Legal Technology Applications Specialist, Stikeman Elliott LLP

Coming Soon:

Member Profile on Yasmin Khan, Manager, Legal Information Services (Law Library), Ontario Ministry of the Attorney General