The SIFT information presented has been adapted from materials by Mike Caulfield under a CC BY 4.0 license.
AI-generated content can be helpful, but it is not always accurate, reliable, or unbiased. Because AI does not “think” or “know” things the way humans do, it can generate misleading or incorrect information, so it is important to assess AI outputs critically, just as you would evaluate information from any other source. Whether you are using AI for research, writing, or studying, taking the time to verify its responses ensures that you are working with credible and useful information.
Here are three useful strategies for assessing AI-generated content:
Because generative AI models are trained differently, each model has its own strengths, and the same prompt will produce different responses in different tools. This makes comparing their outputs a useful practice! Some models excel at generating text with deep reasoning, while others are better suited to analyzing data, producing images, coding, or summarizing information. Experimenting with different models can help you find the ones that best fit your needs and give you a clearer understanding of AI capabilities. For a brief description of different tools and their capabilities, see the AI Tools page on this guide.
The following resources cover methods you can use to critically evaluate information that you encounter, whether or not you know the information is AI-generated.