The article “Beyond Automation” in the June issue of Harvard Business Review has sparked many thoughtful responses since its publication, legal and otherwise, including a summary on the excellent 3 Geeks and a Law Blog.
I was drawn to this article like a bee to honey by its shiny optimism and energizing message for the legal profession. There has been much “the end is nigh” writing on law and technology, along with some rather stark future-guessing from the raft of books on the coming robot apocalypse. All of it is useful and important in preparing us for this new age, but it is also rather wearying.
Thomas H. Davenport (one of the big knowledge management thinkers of our time) and Julia Kirby turn all that doom and gloom into a discussion of augmentation rather than automation.
They ask us to imagine work (or, in our case, legal services) that could be “deepened rather than diminished by a greater use of machines.” Apart from being a lovely bit of alliteration, that phrase turns the alarmist view of automation on its head. By restricting our imaginations to codifying everything that humans currently do, we trap ourselves in an ill-fated race to the bottom. Instead, we need “to see smart machines as partners and collaborators in creative problem-solving.”
Just as we partner with experts in other fields from around the firm to create deeper excellence for our clients, so we should be leveraging these smart machines, adding computers to the giants’ shoulders upon which we stand.
There has been much research and writing on what computers excel at, as well as on what it means to be human. At the University of Oxford, Carl Frey and Michael Osborne wrote a paper in September 2013 entitled “The Future of Employment: How Susceptible Are Jobs to Computerisation?” They look at the computerization of non-routine manual and cognitive tasks; this is the second phase of automation, the first phase (automating routine manual and cognitive tasks) having already happened. They review developments in mobile robotics, as well as in artificial intelligence: the smart algorithms, machine learning, and big data that are beginning to encroach on the cognitive, or knowledge, side of our work.
The authors identify three main obstacles, or “bottlenecks,” to the full computerization of certain tasks; advances against these bottlenecks will determine whether full automation ever happens for some of our jobs. The three are: perception and manipulation (i.e. manual dexterity), creative intelligence (i.e. originality), and social intelligence (i.e. negotiation or persuasion). Using nine variables across these three bottlenecks, the authors assign each job a probability of being susceptible to computerization.
While I’m not sure these nine variables are sophisticated enough (or perhaps I simply disagree with how some jobs are categorized), it is an interesting paper on what computers are good at, and what they aren’t. If you want to see this data in action, steel yourself for the results and head over to the NPR website “Will Your Job Be Done By a Machine?” They have loaded up Frey and Osborne’s data so you can pick your industry and role, and “the computer” spits out, in percentage terms, how likely it is that your job will be superseded by the advance of the robots.
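To make the bottleneck idea concrete, here is a minimal, purely illustrative sketch in Python. This is not Frey and Osborne’s actual model (they fit a statistical classifier over nine job-description variables); the roles, scores, curve, and function names below are all invented for illustration. The intuition it captures is simply this: the more a job depends on the three bottleneck skills, the lower its estimated probability of computerization.

```python
import math

# The three "bottleneck" skill groups from Frey and Osborne's paper.
BOTTLENECKS = ("perception_and_manipulation",
               "creative_intelligence",
               "social_intelligence")

def computerisation_probability(scores, steepness=6.0):
    """Toy model: map average bottleneck dependence (0-1 scale) to a
    probability of computerization via a logistic curve. Higher
    dependence on bottleneck skills means a lower probability."""
    avg = sum(scores[b] for b in BOTTLENECKS) / len(BOTTLENECKS)
    return 1.0 / (1.0 + math.exp(steepness * (avg - 0.5)))

# Hypothetical skill profiles, invented for illustration only:
# a routine document-processing role vs. a judgment-heavy advisory role.
routine_role = {"perception_and_manipulation": 0.2,
                "creative_intelligence": 0.3,
                "social_intelligence": 0.3}
judgment_role = {"perception_and_manipulation": 0.3,
                 "creative_intelligence": 0.8,
                 "social_intelligence": 0.9}

print(f"routine role:  {computerisation_probability(routine_role):.0%}")   # ~80%
print(f"judgment role: {computerisation_probability(judgment_role):.0%}")  # ~27%
```

The real paper trains a far more careful probabilistic classifier on labelled occupations; the point of the sketch is only the shape of the idea, namely that heavy reliance on dexterity, originality, and social skill pushes the probability of automation down.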
And yet, just like the HBR article, as well as Marty Neumeier’s book Metaskills (“Five talents for the robotic age”), it has an optimistic, if challenging, edge: we should “become more human — not more robotic.” How can we use the cognitive heavy lifting of software to realign the human contribution to our jobs and our profession?
Davenport and Kirby set out five alternative paths one might pursue, whether we’re at the beginning, middle, or end of our careers: step up, step aside, step in, step narrowly, and step forward. Some of these paths have us actively engage in the coming automation of tasks, while others build exclusively on those uniquely human characteristics of creative and emotional intelligence. Whichever path is right for an individual, the authors conclude: “In an era of innovation, the emphasis has to be on the upside of people. They will always be the source of next-generation ideas and the element of operations that is hardest for competitors to replicate.”
Among all the new articles and books springing up on this topic as I write (and the many more that will be written before I even submit this column) is another stand-out summer read: Nicholas Carr’s book, The Glass Cage: Automation and Us. On the one hand, Carr joins the rest in ratcheting up the fear factor about our impending fate, this inexorable “progress” of automation, with an excellent review of the development of Google’s driverless cars. The distinction between tacit and explicit knowledge was long used to describe the separation between human and computer. The test was this: if you could explain to an alien how to do something (the individual steps involved), then a computer could do that task too, and probably smarter, faster, and cheaper. But if you couldn’t explain the steps involved, if the task sat outside your conscious mind and relied on muscle memory and a whole host of automatic responses to stimuli, then it was seen as exclusively human territory.
Driving a car was held up as one of those tacit tasks: the sheer range of processing that goes on in the brain, with stimuli coming in from all directions, often in split seconds, could surely never be broken down into mere lines of code. And that was absolutely the case, right up until Google unveiled a fully functioning prototype of an autonomous car in December 2014.
Suddenly, the split between what humans do well and what computers do well seems much less clear-cut. Carr is, however, clear on one point: “Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means.”
Carr’s book is an important voice on this topic because it also acts as a warning against all this mindless progress. Some of the pushback we’re seeing in law firms against all this new smart technology, while largely self-interested at the moment, could in fact be a good thing. The more we automate, the less we learn. And the less we know or understand about the technology, the less able we are to correct it, adjust it, or find alternatives when things go wrong.
In focusing on augmentation rather than automation, we get to learn how much automation is enough: to find the balance in the tasks we undertake so that technology doesn’t “anesthetize us” to the active enjoyment we get from them. There is a great example at the end of Carr’s book about a group of architects who, having taken a long, hard look at the technology available to them, “began to resist the technology’s temptations.” They realized that a strong understanding of the processes involved in architecture allowed them to assign the right resource, whether human or smart machine, to the right set of tasks.
Law firms similarly need to “step in”: to understand that decisions about technology are decisions about the ways in which we work, and about how technology can enable us to do our best and most fulfilling work. That’s a much more optimistic message, and one we should support, particularly when it comes from innovative legal technology vendors themselves, such as Kira or Neota Logic. Michael Mills recently wrote: “When the facts and the rules are clear and consistent, systems will automate. When the context is uncertain and the judgment required is more subtle, systems will augment.”
It may be that the “skepticism” trait, so highly scored amongst lawyers, is exactly the approach needed: to take a step back, think critically about the automation that’s coming, and work out how we can use machines to deepen and strengthen the services we provide to clients.
Kate Simpson is national director of knowledge management at Bennett Jones LLP, and is responsible for developing the firm’s KM strategy and initiatives. The opinions expressed in this article are her own.