Prompt engineering is an essential skill for legal professionals leveraging AI-powered tools to streamline fact development in litigation, investigations, and arbitration. Whether you're working in our EC:AI suite or another platform, crafting effective prompts can help you generate targeted document summaries, analyze deposition transcripts, and synthesize complex information into actionable insights.
Well-designed prompts empower teams to work faster, smarter, and more consistently. To help you get the most out of generative AI in your practice, here are three best practices for creating effective prompts.
1. Err on the side of simplicity.
Much like people, large language models (LLMs) respond best to prompts that are easy to understand. When crafting a prompt, err on the side of simplicity and concision. Avoid including unnecessary information or anything the model might find irrelevant or confusing. Think about giving instructions to someone on your team. What information might they need to be successful?
A good rule of thumb is to use precise verbs in your prompt. Do you want the model to explain? Interpret? Analyze? Summarize? Taking care with your word choice, particularly with verbs, will help the model deliver exactly what you are looking for.
Be explicit about your output format as well. Let the model know whether you want the output delivered as a paragraph, bullet points, or a list, and mention any word limit you have in mind.
For example, consider this prompt: “I am reading a document and I am involved in this case. I want to know more about what the ruling was. What does the document say?”
A better way of framing this prompt would be: “Summarize the ruling of this document in a concise headline. Keep the summary under 15 words. Focus on the outcome.”
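The improved prompt above follows a repeatable pattern: a precise verb, an explicit output format, and a word limit. As a minimal sketch, that pattern can be captured in a small helper function (the function name and structure here are illustrative, not part of any particular tool):

```python
def build_prompt(verb, target, output_format, word_limit=None):
    """Assemble a simple, concise prompt: precise verb + explicit output format."""
    parts = [f"{verb} {target}.", f"Deliver the output as {output_format}."]
    if word_limit is not None:
        parts.append(f"Keep the response under {word_limit} words.")
    return " ".join(parts)

# Mirrors the improved example prompt above.
prompt = build_prompt("Summarize", "the ruling of this document",
                      "a concise headline", word_limit=15)
print(prompt)
# Summarize the ruling of this document. Deliver the output as a concise headline. Keep the response under 15 words.
```

Writing prompts this way forces each of the three choices, verb, format, and length, to be made explicitly rather than left for the model to guess.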
2. Prioritize instructions over constraints.
Most prompts consist of instructions and constraints. An instruction gives the model guidelines on what it should produce. A constraint imposes limitations on the model’s response.
Researchers who study LLMs have found that instructions tend to be more effective than constraints. An instruction explicitly communicates what the user wants; a constraint only tells the model what to avoid, leaving it to infer what it should actually produce.
When writing a prompt, start with instructions alone. If the model misses the mark, add more context and try again. Turn to constraints as a last resort, when the model still has not delivered the clarity or specificity you are looking for.
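One way to keep instructions ahead of constraints is to assemble the prompt in that order, adding context and constraints only after instructions alone have been tried. A rough sketch, with a hypothetical helper and parameter names:

```python
def build_prompt(instructions, context=None, constraints=None):
    """Compose a prompt that leads with instructions; constraints come last, if at all."""
    lines = ["Instructions:"]
    lines += [f"- {item}" for item in instructions]
    if context:
        lines += ["Context:", context]
    if constraints:
        lines += ["Constraints:"]
        lines += [f"- {item}" for item in constraints]
    return "\n".join(lines)

# First attempt: instructions only.
v1 = build_prompt(["Summarize the deposition transcript in five bullet points."])

# If the model misses the mark, add context before reaching for constraints.
v2 = build_prompt(
    ["Summarize the deposition transcript in five bullet points."],
    context="The transcript concerns a contract dispute over delivery deadlines.",
    constraints=["Do not speculate beyond what the witness stated."],
)
```

Iterating from `v1` toward `v2` keeps the prompt instruction-led: the constraint appears only once the simpler versions have been given a fair try.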
3. Experiment with formats and styles.
Prompt engineering rewards users who are not afraid to experiment. Change up your format, style, and word choice to yield the result you are looking for. Sometimes even minor changes to the prompt can lead to dramatically different outputs.
In some cases, zero-shot prompting, in which you provide no examples at all, might work best. Other times, few-shot prompting is a better fit: the model needs only a few clear examples for guidance. In still other cases, system prompting, which sets the overall boundaries and instructions for the model, is the right tool.
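The three styles can be compared side by side with a small sketch that builds the same request as a zero-shot, few-shot, or system-prompted message. All names and example clauses here are illustrative:

```python
def make_prompt(task, examples=(), system=""):
    """Build a prompt in zero-shot (no examples), few-shot, or system-prompted style."""
    sections = []
    if system:
        sections.append(f"System: {system}")
    for question, answer in examples:
        sections.append(f"Input: {question}\nOutput: {answer}")
    sections.append(f"Input: {task}\nOutput:")
    return "\n\n".join(sections)

# Zero-shot: the task alone.
zero = make_prompt("Classify this clause as indemnification, warranty, or other.")

# Few-shot: a few clear examples guide the format.
few = make_prompt(
    "Classify this clause as indemnification, warranty, or other.",
    examples=[
        ("Seller shall hold Buyer harmless from all claims.", "indemnification"),
        ("Goods are warranted free of defects for one year.", "warranty"),
    ],
)

# System prompting: overall boundaries set up front.
sys_style = make_prompt(
    "Classify this clause as indemnification, warranty, or other.",
    system="You are a contracts analyst. Answer with a single label.",
)
```

Trying the same task in each style, as above, is often the fastest way to discover which one your model and your documents respond to best.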
Here are some examples of prompts that are similar but would produce different outcomes:
- Identify and extract evidence of fraudulent business practices.
- What are 3-5 examples of evidence of fraudulent business practices?
- Whistleblower testimonies highlight a range of suspicious activities within the procurement department. These include...
- What evidence in the company's financial records suggests the possibility of fraudulent business practices?
Ultimately, the best way to check the efficacy of your prompt is to see whether you are getting the outcome you want. If not, feel free to adjust the prompt, provide more context, or both. Try again and iterate as needed.
Keep in mind that LLMs are trained on vast amounts of human-generated text, and they can make mistakes, including confident-sounding ones. Always double-check your output before moving on to the next task. This added step will go a long way toward ensuring accuracy.