As more workers, scholars, researchers, and students use artificial intelligence (AI) to enhance their work, writing effective prompts has become increasingly important.

Experts say the difference between an ineffective AI response and a useful one boils down to one factor: how you pose the question.

Erica Newcome, STEM and interdisciplinary research librarian at the University of Miami Libraries, held a Zoom session earlier this month titled “Prompt Like a Pro: Learn How to Write Effective Generative AI Prompts for Your Research.” Newcome based her presentation on the CLEAR method developed by Leo S. Lo, dean of the libraries at the University of Virginia.

CLEAR stands for five core principles that should be applied to prompts: concise, logical, explicit, adaptive, and reflective.

Newcome said a prompt can be a sentence, a paragraph, or even a whole page with a set of instructions for the AI tool.

“If you provide AI with a clear specific prompt, it will increase the likelihood that your output will be more relevant and reliable,” she said.

The prompts should:  

Be concise: Ensure prompts are clear and exclude excessive language. 

Be logical: Make sure prompts are structured and coherent.

Be explicit: Define output specifications clearly. 

Be adaptive: Be flexible in refining prompts that provide unsuitable outputs. 

Be reflective: Continuously evaluate and improve prompts.

AI is not a real person, so you do not need to write “please” or “thank you” in your query.

Newcome showed two example prompts. One read, “Tell me about the French Revolution,” while the other read, “Provide a concise overview of the French Revolution, emphasizing its causes, major events, and consequences.”

Then she asked the participants which was more explicit. The answer was the second, because it specifies the context and the details the response should cover.

If your prompt does not give you the answer you are seeking, you must adapt, Newcome said: tweak the prompt based on the output.

For instance, if you ask Microsoft Copilot or ChatGPT, “Tell me about the kinds of birds I would see in Miami,” the answer may be simple and general.

But if you create a persona (who you are, what your profession is) and give more details, the answer may be much more effective.

So, Newcome tweaked the prompt to read: “I am an ornithologist working for the Audubon Society, checking the migration patterns of birds in Miami. What time of year would have the greatest migration for me to report back to Audubon?”

The answer was detailed: it provided names of birds, cited scientific studies on the topic, and offered habitat surveys.
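For readers who build prompts programmatically, the persona-plus-context pattern Newcome demonstrated can be sketched as a small helper. This is an illustrative sketch only; the function name and fields are hypothetical, not part of any AI tool's API.

```python
# Illustrative sketch: assembling a persona-style prompt following the
# CLEAR principles (concise, explicit). All names here are hypothetical.

def build_prompt(persona: str, task: str, output_spec: str) -> str:
    """Combine a persona, a task, and an explicit output specification
    into a single prompt string."""
    return f"{persona} {task} {output_spec}"

prompt = build_prompt(
    persona="I am an ornithologist working for the Audubon Society.",
    task="Identify the peak bird-migration periods in Miami.",
    output_spec="List the months, the species involved, and any relevant studies.",
)
print(prompt)
```

The same three pieces, persona, task, and output specification, can be reused and refined one at a time when an output falls short, which is the adaptive step Newcome described.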

Once you have an answer from AI, it is important to verify the information, she said.

Newcome said one way to do so is to use the SIFT Method, which provides criteria to evaluate the output of the prompts. SIFT stands for Stop before you read, Investigate the source (of the information), Find better coverage, and Trace claims.

When presented with a website offered by Claude or ChatGPT, one needs to ask whether the website is trustworthy. If it is not, type keywords from the report into Google and see whether other trustworthy sources cover the same information. If they do, that is a good sign, she said.

Checking information provided by AI is crucial, since chatbots can produce wrong information, known as hallucinations.

Another principle is to be adaptive, since AI is changing so fast: what is relevant today may not be relevant a few weeks from now.

You may want to adjust your prompt depending on current events.

So, the outcome of one prompt can help you shape the prompts that follow.

For instance, Newcome said, which prompt is more adaptive: “Examine the relationship between social media usage and anxiety in adolescents,” or “Discuss the impact of social media on mental health”?

Newcome said the first one is more adaptive because it gives more context for the answer and can be followed by other prompts, such as questions about how adolescents' school performance or socialization is affected.

Since technology is improving and moving so fast, users should be reflective about their prompts, continually evaluating what they have asked and how to improve it, said Newcome.