At Major Drilling, we see the risk of contracting out all of our AI proficiency
Andrew McLaughlin co-wrote this article with Jody Cairns, the director of IT at Major Drilling Group International Inc.
“There’s an AI app for that” – it’s an inescapable mantra pervading the business world in 2024.
Business leaders’ inboxes are increasingly inundated with unsolicited AI offerings to turbocharge their internal systems and processes. They’re getting cold calls at all hours from third parties pitching AI solutions to problems they didn’t even know they had – with “use cases” seemingly under every rock. Given the complexity of this technology and the breakneck pace at which it’s evolving, the instinct may be to capitulate and sign on with external providers for fear of missing the boat. Others might retreat into the fetal position and try to ignore the AI wave altogether. Neither approach is advisable.
For businesses with at least some internal IT capacity, the best approach is to start developing in-house solutions now. While these initiatives might bring some valuable benefits and quick wins to your organization, the broader potential for institutional knowledge gain – i.e. becoming AI-proficient – is the deeper underlying value of this approach. It’s this value that companies can’t afford to contract out over the long term.
In our case, as a drilling services provider to the mining sector, there wasn’t an obvious starting point for using generative AI. What was clear was that we just had to get started somewhere. And where that “somewhere” was mattered little – so long as we had a sandbox to experiment and learn.
We ended up landing on contract review as our pilot project. How did we get there?
Knowing that we wanted to jump in and get our feet wet, we established a set of simple criteria that would allow us to identify a worthwhile first problem to tackle with AI.
First, the chosen initiative had to bring tangible value to the organization, such as productivity or safety benefits. Next, we wanted a back-office process where we could make mistakes and correct them without risking external data leakage or contamination of larger systems, and then carry those lessons into other use cases. As an initial pilot project, keeping the number of people involved to a minimum was vital so the effort wouldn’t become unwieldy during development. Finally, the project had to present enough of a challenge to push our developers beyond a superficial dive. The contract review process ticked all these boxes.
Given the widely publicized privacy and security concerns around this technology, we used internally hosted, pre-trained models and APIs from a reputable platform, which helped ensure compliance with the relevant regulations and standards. The platform also provided transparent, detailed documentation on how the models were trained, what data they used, how they handled sensitive information and what best practices to follow. To quickly evaluate the feasibility of the technology, our development team used project templates to build a proof of concept that did not communicate with any external service.
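For readers curious what such a self-contained proof of concept can look like, here is a minimal, hypothetical sketch in Python. It runs an open, pre-trained summarization model entirely on local infrastructure; the model choice, truncation limit and function name are illustrative assumptions, not a description of our actual stack.

```python
# A minimal, hypothetical proof-of-concept sketch: a pre-trained model is
# downloaded once, then run entirely on internal hardware, so contract text
# never leaves the network at inference time. Illustrative only; the model
# and limits below are assumptions, not Major Drilling's actual setup.
from transformers import pipeline

# Load a locally cached, open summarization model. Once cached, setting the
# environment variable HF_HUB_OFFLINE=1 makes any external call fail loudly.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def review_contract(clause: str) -> str:
    """Return a short, human-reviewable summary of one contract passage."""
    # Truncate to stay within the model's context window; a real pilot
    # would chunk the document and summarize it section by section.
    result = summarizer(clause[:3000], max_length=150, min_length=20,
                        do_sample=False)
    return result[0]["summary_text"]
```

With that kind of hardened setup, an accidental call to an outside service fails loudly rather than silently leaking contract text – which is exactly the property a first sandbox needs.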
We embarked on this effort knowing we could not automatically rely on the output: trained in-house counsel would always have to review it (i.e., a human in the loop). For anyone familiar with last year’s infamous case of New York lawyers citing AI-hallucinated jurisprudence, this conclusion should come as no surprise (a similar case was recently reported out of British Columbia).
The first version spat out a marginally useful summary that tracked against our common contract concerns. From there, the legal and IT departments collaborated on refining the application iteratively, and each pass produced more valuable and actionable output. Ultimately, the goal is for the AI output to look like our current in-house legal review, i.e. a marked-up document with track changes embedded in the text and suggestions and comments in the margins. In other words, it would be akin to having a junior lawyer conduct an initial review, but completed in seconds, not hours.
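As a simple illustration of that redline concept (not our production tooling), the sketch below uses Python’s standard-library difflib to render a model’s suggested rewrite as inline insertions and deletions; the clauses are invented for the example.

```python
# A hypothetical sketch of the redline-style output described above: given
# an original clause and a suggested rewrite, render word-level edits
# inline, loosely analogous to track changes. difflib is standard library;
# the example clauses are made up.
import difflib

def redline(original: str, suggested: str) -> str:
    """Mark up deletions and insertions between two versions of a clause."""
    a, b = original.split(), suggested.split()
    parts = []
    for tag, a1, a2, b1, b2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "equal":
            parts.extend(a[a1:a2])
        else:
            if a1 < a2:  # words removed from the original
                parts.append("[del: " + " ".join(a[a1:a2]) + "]")
            if b1 < b2:  # words proposed in the rewrite
                parts.append("[ins: " + " ".join(b[b1:b2]) + "]")
    return " ".join(parts)

print(redline("Payment is due within 90 days of invoice.",
              "Payment is due within 30 days of invoice."))
# -> Payment is due within [del: 90] [ins: 30] days of invoice.
```

A reviewer-facing version would emit genuine track changes and margin comments in the document itself, but even this crude markup shows the shape of the output a human lawyer would then vet.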
We’re already seeing how the knowledge gained through this experimentation process can be applied to other use cases across various departments, including deployment into our field operations. But maybe more significantly, it's helped us overcome that first daunting challenge of just stepping into the AI field.
Those who want to use AI effectively must evaluate and learn from it, not just adopt it unquestioningly. By testing AI technology, businesses can understand its strengths and limitations and how to use it responsibly and ethically. AI technology constantly changes, so companies must be flexible, adaptable, and ready to face new problems and opportunities. Only through developing this proficiency in-house will businesses be positioned to seize these opportunities and effectively mitigate the potential risks (financial, reputational and otherwise).
Given the ubiquitous and sometimes overwhelming volume of offerings in the market, it’s easy to lose sight of the inherent value of developing in-house AI proficiency. But it’s quickly becoming an expected core component of modern institutional knowledge. And it’s far too important a building block for the future of business to offload it entirely onto external third parties.
Jacques Pinette, Major Drilling’s AI developer, helped inspire this article.