Generative AI is particularly sensitive to prompts: the way you frame a request directly shapes the response you receive. Especially when using a tool such as ChatGPT, be intentional about what you are asking for; careful prompting can help reduce issues such as misinformation and hallucination. In drafting prompts, the CLEAR framework developed by Leo S. Lo is particularly helpful.
The CLEAR framework suggests that when working with AI, your prompts should be:
Concise: Including too much information in a prompt can dilute what the tool focuses on. Keep prompts focused on the essential details.
Logical: Make sure your prompt follows a logical flow, especially when it involves relationships between concepts or sequences of steps.
Explicit: Broad questions tend to produce less precise answers. When creating a prompt, include relevant details that specify the type of response you want.
Adaptive: When a prompt doesn’t get the response you were looking for, be willing to adapt. Generative AI tools often retain a memory of previous prompts within a conversation; you can use this to your advantage by asking follow-up questions or by correcting and critiquing the tool’s response.
Reflective: After receiving a response, take time to reflect on it. What worked? What didn’t? Reflecting on responses helps you draft better prompts in the future and reveals which types of inquiry work best with a given tool.
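To make the Adaptive principle concrete, here is a minimal sketch of how chat-based tools keep a running conversation history that follow-up prompts build on. This is not tied to any particular product's API; the `add_prompt` helper and the role/content message shape are illustrative assumptions, loosely modeled on how many chat tools organize a conversation.

```python
# Illustrative sketch only: how a conversation history accumulates,
# letting follow-up prompts correct or refine an earlier response
# (the "Adaptive" component of CLEAR).

conversation = []

def add_prompt(history, role, content):
    """Append one message to the running conversation history."""
    history.append({"role": role, "content": content})
    return history

# Initial prompt: concise, logical, and explicit.
add_prompt(conversation, "user",
           "Summarize the CLEAR prompt framework in three sentences.")

# Placeholder for whatever the tool actually answered.
add_prompt(conversation, "assistant",
           "(tool's first response goes here)")

# Adaptive follow-up: critique and refine rather than starting over.
add_prompt(conversation, "user",
           "Good start, but shorten it and give one example per component.")

print(len(conversation))  # 3 messages so far
```

Because the earlier exchange stays in the history, the follow-up prompt can be short and corrective instead of restating the whole request.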
The last component of the CLEAR framework, Reflective, is key to being a responsible user of generative AI tools. Evaluate responses for accuracy, precision, and potential bias. Below are some guiding questions you can use when evaluating a tool and its responses.