Understanding the two faces of legal technology and AI

Fernando Garcia

I am a strong advocate for the adoption of legal technology and artificial intelligence. Not only do I believe that adopting legal tech will make lawyers more efficient and able to provide better, more affordable services to clients, but I also believe there will come a time, very soon, when failing to use legal technology tools will put a lawyer at risk of breaching their professional obligations.

For example, for lawyers in Ontario, Rule 4.1-1 of the Rules of Professional Conduct states that “a lawyer shall make legal services available to the public in an efficient and convenient way.” That said, we must also understand that, like any other tool, these technologies carry inherent risks when we depend, or over-depend, on them.

Legal technology, like the human beings who create it, can be influenced by bias and moral judgment, and it can reach results that have an adverse impact on clients, the public or society. The reason is quite simple: legal tech is not immune to the principle of garbage in, garbage out. Legal tech is created and implemented by humans, so the tool reflects the biases and moral judgments that influence all humans. Author Buster Benson found that there are more than 180 human biases, arising from four main factors:

  1. There is so much information bombarding us that we take shortcuts to reduce the noise.
  2. Lack of meaning is confusing, so we fill in the gaps. A signal becomes a story.
  3. We need to act fast, so we jump to conclusions. Stories become decisions.
  4. We don’t remember everything we take in, so there are distortions in retrieving memories and thoughts.

These same biases will be exhibited by the resulting legal technology.
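
To make the garbage-in, garbage-out point concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the data, the groups and the naive “model.” A system trained on skewed historical decisions does nothing more than echo that skew back as a prediction.

```python
# Hypothetical sketch of "garbage in, garbage out": a naive model
# trained on skewed historical outcomes simply reproduces the skew.
# All data below is invented for illustration.

from collections import defaultdict

# Invented historical records: (neighbourhood, outcome).
# Past decision-makers approved "north" applicants far more often.
history = [
    ("north", "approve"), ("north", "approve"), ("north", "approve"),
    ("north", "deny"),
    ("south", "deny"), ("south", "deny"), ("south", "deny"),
    ("south", "approve"),
]

# "Training": tally the outcomes observed for each group.
counts = defaultdict(lambda: defaultdict(int))
for group, outcome in history:
    counts[group][outcome] += 1

# The "model" just memorizes the majority outcome per group.
model = {group: max(outcomes, key=outcomes.get)
         for group, outcomes in counts.items()}

# Two otherwise identical applicants receive different outcomes
# purely because of the biased training data.
print(model["north"])  # approve
print(model["south"])  # deny
```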

A good example of this is the autonomous vehicle. The directives may be as simple as “do not injure passengers, preserve the assets and do not endanger the public and the property of others,” but there will come a time when difficult decisions need to be made, and that is when the biases and moral judgments of the developers and the technology creep in. The Moral Machine site developed by MIT illustrates this well. On that site, you play the role of an autonomous vehicle placed in situations where you must either continue on your path and kill a pedestrian or drive into a barrier and kill the occupant. Across the changing scenarios, you are required to choose between killing an animal or a person, a child or an older person, a pregnant or a non-pregnant person, and a homeless person or a family. While grim, it poignantly demonstrates that even apparently benign technological tools unavoidably bring with them moral judgments and biases.
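
How such judgments become code can be sketched in a few hypothetical lines. Any collision-avoidance rule needs some way to compare harms, and the weights doing the comparing (invented here) are a developer’s moral judgments frozen into software, right down to the tie-breaking rule.

```python
# Purely hypothetical sketch: even a "simple" collision-avoidance
# rule forces its developers to rank lives and property. The weights
# are invented; the point is that *someone* had to choose them.

HARM_WEIGHTS = {          # a developer's moral judgment, frozen in code
    "passenger": 1.0,
    "pedestrian": 1.0,
    "animal": 0.1,
    "property": 0.01,
}

def choose_action(stay_course_harms, swerve_harms):
    """Pick whichever action the hardcoded weights score as less harmful."""
    def total(harms):
        return sum(HARM_WEIGHTS[kind] for kind in harms)
    # Ties default to staying the course -- itself a moral choice.
    return "stay course" if total(stay_course_harms) <= total(swerve_harms) else "swerve"

# The Moral Machine dilemma: continue and strike a pedestrian,
# or swerve into a barrier and harm the passenger.
print(choose_action(["pedestrian"], ["passenger"]))  # "stay course" (a tie under these weights)
```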

Like the autonomous vehicle, legal-tech tools and their output must not be taken at face value. Rather, it is our obligation to use the tool while also understanding, offsetting and accounting for the biases and limitations of the technology. Returning to the autonomous vehicle analogy, some manufacturers have pledged zero passenger fatalities in their vehicles. This takes the discretion to make a tough decision away from the driver, but it also shifts liability, and a measure of one’s responsibility, to the manufacturer or developer. Most manufacturers, seeking to avoid being placed in a moral dilemma, will ultimately defer to the driver’s decision: the driver will be required to always maintain some contact with the steering wheel and will be called upon, at a moment’s notice, to make that tough call. So too, a lawyer cannot blindly adopt these tools; there must always remain an element of human judgment and discretion.
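
As a purely illustrative sketch of what keeping one’s hands on the wheel might look like in a legal-tech workflow (`research_tool` below is a stand-in for any real product, not an actual API), the tool may draft, but nothing is acted on without a lawyer’s explicit sign-off:

```python
# Hypothetical human-in-the-loop wrapper: the tool drafts a
# recommendation, but a lawyer must accept, edit or reject it
# before anything reaches the client.

def research_tool(question: str) -> str:
    # Placeholder for a real legal-tech tool's output.
    return f"Suggested answer to: {question!r}"

def advise_client(question: str) -> str:
    draft = research_tool(question)
    print(f"Tool draft:\n  {draft}")
    verdict = input("Lawyer review - accept, edit, or reject? ")
    if verdict == "accept":
        return draft
    if verdict == "edit":
        return input("Enter revised advice: ")
    raise RuntimeError("Draft rejected; lawyer must advise independently.")

# Example (interactive):
# advise_client("Is this clause enforceable in Ontario?")
```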

Another limitation of legal tech is that its developers do not want to share the secret sauce (the algorithms) behind how decisions are made. This is the key competitive element they are commercializing, so they seek to protect their investment. The problem is that, by keeping the decision-making process closed to criticism and evaluation, there is a risk that its limitations can never be addressed or overcome. Unless regulation forces this secret sauce to be shared and protected, or developers mount a concerted initiative to do so, it will be left to the lawyer’s discretion how legal tech and its output are applied and how far they can be relied upon.
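
A hypothetical contrast shows what is at stake. Both tools below are invented; they reach the same recommendation, but only the second exposes reasons that a lawyer can test, challenge and, where necessary, reject.

```python
# Invented illustration of the "secret sauce" problem: an opaque
# tool returns a bare recommendation, while a transparent one
# returns reasons the lawyer can evaluate.

def opaque_tool(case_facts: dict) -> str:
    # Proprietary scoring happens behind the API; no rationale returned.
    return "settle"

def transparent_tool(case_facts: dict) -> tuple[str, list[str]]:
    reasons = []
    if case_facts.get("damages", 0) < 50_000:
        reasons.append("claim value below litigation cost threshold")
    if case_facts.get("precedent_unfavourable"):
        reasons.append("controlling precedent favours the other side")
    recommendation = "settle" if reasons else "litigate"
    return recommendation, reasons

facts = {"damages": 20_000, "precedent_unfavourable": True}
print(opaque_tool(facts))       # "settle" -- but why?
print(transparent_tool(facts))  # same answer, with reviewable reasons
```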

AI and legal technology will help standardize routine transactions and will give lawyers extensive and immediate access to information. By lowering the cost of providing legal advice, they could also be applied to effectively address the access-to-justice challenges we face today. For these reasons, I agree with and will continue to advocate for their adoption.

However, legal tech should never, and will never, fully replace lawyers. It is a tool to assist and to provide more efficient services, but it is the professional responsibilities lawyers owe to their clients and to society that will help overcome the bias inherent in any technology. We should never take our hands completely off the steering wheel.