Large language models can be harnessed for efficiency, and their risks reduced, by using these approaches
As the director of legal innovation at Caravel Law, I have been experimenting with various legal tech solutions, including a chatbot built on large language models (LLMs) that respond to natural language prompts. If you have used ChatGPT, you are familiar with the need to craft a suitable query to get an answer. With all this prompting practice over the last year, I have noticed that my skills have improved, and I want to share some tips I have learned through my work.
Prompting is a crucial skill for lawyers who want to leverage the power of LLMs to enhance their practice. Unlike search engines, LLMs do not simply retrieve information from the internet; rather, they generate new content based on their training data and the instructions the user provides via the prompt. Use clear, specific, and actionable prompts to get the best results from these tools. A useful way to think about this is to imagine the instructions you would give an articling student or a junior associate to draft a document or a letter. That is the level of specificity you should use with LLMs.
Here are some tips for better prompting:
You know what they say: practice, practice, practice! The more you play with prompts, the more fun you'll have. Try to make it a habit to use LLMs daily, even if you don't need them, and see what they can do for you. Don't worry if you feel like you are wasting time or not getting your desired results. It's normal to struggle at first, but trust me, it's worth it. Once you master prompting, you'll be amazed by how much LLMs can boost your productivity. I suggest practising with non-client-related tasks and using free LLMs like ChatGPT or Anthropic's Claude (for example, I experiment with these for marketing tasks). Experimenting with various programs can also help you understand which LLMs are better suited to specific tasks.
If you use the free version of an LLM, though, check the terms of service and confirm whether the provider will use your data to train its models. If you can pay for a subscription, it may be worth it to know your data will not be used in this way.
Think of prompts as sentences that tell the LLM what to do for you (good news: grammar rules do not apply here). Try starting your prompts with an action word, a verb that determines what kind of output you want, like “summarize,” “draft,” or “create,” and then give the LLM some information to work with.
The more relevant information you give, the better the output, but more is not necessarily better. The context should be relevant to the prompt; irrelevant or superfluous information will confuse the LLM and produce a weaker result. If you're replying to an email, you could copy and paste it into the LLM to provide context, and you may be surprised at how well it works (but be careful about privacy and confidentiality issues).
Here is an example of a prompt that starts with an action word and then gives some context:
We can type, “Draft a bulletin summarizing the latest developments on global regulations on AI and create a 50-word LinkedIn post”. Notice that I included “50-word post,” which limits the length of the output the LLM generates.
Next, you can spice up the text output by choosing a tone: explicitly tell the LLM whether you want it to sound professional or playful, formal or informal, or even like Shakespearean English…
To help the LLM draft appropriate content, you could identify the audience. For example, we could recraft our example query to be “Draft a bulletin summarizing the latest developments on global regulations on AI and create a 50-word LinkedIn post where the audience is lawyers. Use a professional, formal tone for the bulletin and the LinkedIn post.”
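For readers comfortable with a little code, the same structure carries over directly to programmatic use. The sketch below is only an illustration, not a recommendation of any particular product: it assumes the OpenAI Python SDK with an API key set in your environment, and the model name is a placeholder. It simply shows the action verb, context, audience, tone, and length limit combined into one prompt.

```python
# Minimal sketch: sending a structured prompt to an LLM programmatically.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in
# the environment. The model name below is only an example.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a bulletin summarizing the latest developments on global "
    "regulations on AI and create a 50-word LinkedIn post where the "
    "audience is lawyers. Use a professional, formal tone for the "
    "bulletin and the LinkedIn post."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```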
Sometimes, LLMs can fabricate or misinterpret things, otherwise referred to as “hallucinations.” If you upload a document in Word or PDF format, the LLM generates its output based on that document, which reduces hallucinations in the output text. This is helpful for legal work, where accuracy and reliability are crucial. You can query, search, or draft text from any document.
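If you prefer to see the idea in code, document grounding amounts to putting the source text in front of the model and telling it to rely only on that text; the chat tools handle the Word or PDF extraction for you when you upload a file. A minimal sketch, assuming the OpenAI Python SDK, a plain-text copy of the document, and a hypothetical file name:

```python
# Sketch of "grounding" a response in a supplied document.
# Assumes the OpenAI Python SDK and a plain-text copy of the document;
# the file name and model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

document_text = Path("engagement_letter.txt").read_text()  # hypothetical file

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the document provided by the user. "
                "If the document does not contain the answer, say so."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{document_text}\n\n"
                       "Summarize the termination provisions in plain language.",
        },
    ],
)

print(response.choices[0].message.content)
```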
To improve the quality of an LLM’s output, refine your query after getting the initial response and check for accuracy and relevance. I rarely accept the first result I receive. Depending on which LLM you use, you may have different options for doing this. For example, ChatGPT lets you interact conversationally with the LLM and ask for clarification or feedback. MS Copilot, on the other hand, requires you to edit your query and regenerate the output. You can experiment with different phrasing and see how the LLM responds. This unique feature of generative AI allows you to iterate and get different results quickly. Usually, it is faster and easier to adjust your query and get a better output.
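In code, refinement is just a continued conversation: the first answer goes back into the message history along with a follow-up instruction. A minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name:

```python
# Sketch of iterative refinement: keep the conversation history and send a
# follow-up instruction instead of accepting the first draft.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o"  # illustrative model name

messages = [
    {"role": "user", "content": "Draft a 50-word LinkedIn post on new AI "
                                "regulations, for an audience of lawyers."}
]

first = client.chat.completions.create(model=model, messages=messages)
draft = first.choices[0].message.content

# Feed the first draft back in, then ask for a revision.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Make the tone more formal and add a call to action."})

revised = client.chat.completions.create(model=model, messages=messages)
print(revised.choices[0].message.content)
```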
Ensure the quality of the output. LLMs have inherent challenges and limitations, such as generating inaccurate or misleading information, exhibiting biases, or producing inappropriate content. Therefore, lawyers should always review and edit LLM output and confirm its legal validity. LLMs are not a replacement for human expertise but rather a valuable tool that can improve and enrich legal practice.
Large language models will significantly impact the legal field. By practising effective prompt crafting, using document uploads for accuracy, refining queries, and diligently verifying outputs, lawyers can dramatically enhance their legal work while managing the risks these tools present. You will need to spend time training yourself on the tools before you become productive. I encourage you to start your journey today.