Artificial Intelligence (AI) and Media Literacy
SIFT Evaluation
The SIFT information presented here has been adapted from materials by Mike Caulfield, used under a CC BY 4.0 license.
Evaluating AI Outputs
AI-generated content can be helpful, but it is not always accurate, reliable, or unbiased. Because AI does not "think" or "know" things the way humans do, it can sometimes generate misleading or incorrect information. It is therefore important to assess AI outputs critically, just as you would evaluate information from any other source. Whether you are using AI for research, writing, or studying, taking the time to verify its responses ensures that you are working with credible and useful information.
Here are three useful strategies for assessing AI-generated content:
- Fact-Checking with Credible Sources: AI tools do not verify facts before generating responses, so it is up to you to confirm accuracy. Cross-check AI-generated information against trusted sources like academic databases, government websites, or reputable news outlets. If an AI response makes a surprising or bold claim, always look for supporting evidence before relying on it.
- Checking for Bias and Perspective: AI learns from the data it is trained on, which means it can sometimes reflect biases or present a narrow perspective. Pay attention to whether AI-generated content seems one-sided, lacks diverse viewpoints, or reinforces stereotypes. If you are researching a complex issue, look at multiple credible sources to get a well-rounded understanding.
- Assessing Clarity and Logic: AI can sometimes produce responses that sound authoritative but may not make logical sense. If something seems off, ask yourself: Does the information follow a clear and logical structure? Does it provide relevant examples? Does it answer my question in a meaningful way? If an AI-generated response feels vague or overly generic, try rewording your prompt or breaking it down into smaller, more specific questions.
Comparing Outputs from AI Models
Because generative AI models are trained differently, each model has its own strengths, and the same prompt will produce different responses in different tools. This makes comparing their outputs a useful practice! Some models excel at generating text with deep reasoning, while others are better suited to analyzing data, producing images, coding, or summarizing information. Experimenting with different models can help you find the ones that best fit your needs and give you a clearer understanding of AI capabilities. For brief descriptions of different tools and their capabilities, see the AI Tools page on this guide.
Evaluation Methods
The following resources cover methods you can use to critically evaluate information that you encounter, whether or not you know the information is AI-generated.
- SIFT Method: SIFT is a source evaluation methodology created by Mike Caulfield, a misinformation researcher. SIFT emphasizes examining our own biases and contextualizing sources to determine whether they are suitable for our needs. SIFT is particularly helpful with online sources, news, and social media.
- CCOW Test: CCOW is an acronym for Credentials, Claims, Objectives, and Worldview. This evaluation method is designed to actively guide you in investigating and thoroughly assessing the credibility of information. CCOW was created by Anthony Tardiff of the Foley Library at Gonzaga University.
- The ROBOT Test: From The LibrAIry: "Being AI Literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles. We have created a tool you can use when reading about AI applications to help consider the legitimacy of the technology."