Artificial Intelligence (AI) and Media Literacy
Identifying Bias in Technology
The infographic below discusses some of the ways in which human bias can affect various forms of technology.
Artificial Intelligence (AI) Concerns
AI offers many benefits, but it also raises important challenges that you, as a responsible AI user, must navigate. Understanding issues like copyright, misinformation, ethics, and bias helps you ensure that you use AI responsibly and minimize its potential risks.
Ethics
AI introduces a range of ethical concerns, from privacy risks to misinformation. AI-powered tools often collect and process user data, which raises questions about how personal information is stored and used. Additionally, AI-generated deepfakes and misleading content can manipulate public perception and fuel misinformation. AI data centers require massive computational power, meaning that AI training alone has a significant environmental impact, and the potential for AI to replace human jobs is a further ethical consideration. Responsible AI use means understanding these risks and making informed decisions about when and how to use AI tools.
To engage with AI ethically:
- Protect your privacy: Be mindful of the data you input into AI systems, especially personal or sensitive information. Follow institutional guidelines, where applicable.
- Be cautious with AI-generated media: Deepfakes and altered content can be deceptive, so always verify sources before sharing AI-generated images or videos.
- Stay informed on AI ethics: Keep up with discussions on AI’s social, environmental, and economic impact.
Copyright
AI-generated content raises important copyright questions because AI tools create text, images, and other media based on existing data, but they do not “own” their outputs. In many cases, AI-generated work does not qualify for copyright protection since it lacks human authorship. Additionally, some AI tools may pull from copyrighted materials in ways that blur the lines of fair use, potentially leading to legal and ethical concerns. Be aware that using AI-generated content without proper attribution could result in plagiarism, and some institutions or publishers may have strict policies on AI-assisted work.
To avoid copyright issues when using AI-generated content:
- Always attribute sources: If AI generates content based on existing materials, cite or acknowledge sources where appropriate.
- Verify ownership: AI cannot determine whether its outputs are original or derived from copyrighted works, so be cautious when using AI-generated text, images, or code.
- Use AI as a tool, not a replacement: Rather than copying AI-generated content, use it to brainstorm, outline, or refine your own original work.
Hallucinations
AI hallucinations occur when an AI model generates inaccurate, misleading, or completely false information. This happens because AI does not fact-check itself; it simply produces responses based on statistical patterns rather than verified knowledge. AI tools may fabricate sources, misrepresent facts, or confidently present incorrect information, so it is essential to verify outputs before relying on them for research or decision-making.
To prevent the spread of AI-generated misinformation:
- Cross-check with reliable sources: Compare AI-generated information with trusted references, such as academic databases, government websites, or reputable news outlets.
- Be skeptical of overly confident or specific claims: If something seems too precise or surprising, investigate further before accepting it as fact.
- Use AI as a starting point, not an endpoint: Treat AI-generated content as a draft or brainstorming tool, not a definitive source of truth.
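One common form of hallucination is a fabricated citation. The sketch below (not part of this guide's recommendations, just an illustration) shows a first-pass check in Python: confirm that a DOI an AI tool cites is at least well-formed before you try to resolve it yourself at https://doi.org/. The regex is a simplified version of the DOI shape, and a syntactically valid DOI can still be invented, so manual verification against a trusted database remains essential.

```python
import re

# Simplified DOI shape: "10." + a 4-9 digit registrant prefix + "/" + a suffix.
# This is an illustrative assumption, not the full DOI specification.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(identifier: str) -> bool:
    """Return True if the string matches the basic DOI shape.

    Passing this check does NOT mean the source exists; it only filters
    out obviously malformed identifiers before you resolve the DOI yourself.
    """
    return bool(DOI_PATTERN.match(identifier.strip()))

print(looks_like_doi("10.1109/CogSIMA61085.2024.10553755"))  # True
print(looks_like_doi("doi:fabricated-citation"))             # False
```

Treat a passing result only as permission to move to the real verification step: paste the DOI into https://doi.org/ or search the title in an academic database and confirm the article actually exists.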
Bias
AI models are trained on vast amounts of human-created data, which means they can inherit and amplify biases that are already present in that data. This can result in AI-generated content that reflects gender, racial, or cultural biases, sometimes reinforcing stereotypes or providing incomplete perspectives. Bias in AI can affect hiring decisions, criminal justice systems, and everyday search results. Recognizing and mitigating AI bias is essential for ensuring fair and equitable outcomes in both academic and professional settings.
To minimize the impact of AI bias:
- Be aware of potential bias: Recognize that AI-generated content is not always neutral and may reflect societal prejudices.
- Use diverse sources: Cross-reference AI outputs with information from multiple perspectives to get a more balanced view.
- Critically analyze AI-generated responses: Question whether AI’s output is fair, inclusive, and representative of different viewpoints.
Generative AI and Human Bias
The video below discusses how generative AI programs can reproduce human biases, as well as some methods for evaluating AI-generated content.
References and Further Readings
- Agree to disagree: Will artificial intelligence do more harm than good? A debate [Video]. (2022). In Films On Demand. Films Media Group. https://fod.infobase.com/PortalPlaylists.aspx?wID=105049&xtid=283503
- Hamid, O. H. (2024). Beyond probabilities: Unveiling the delicate dance of large language models (LLMs) and AI-hallucination. 2024 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 85–90. https://doi.org/10.1109/CogSIMA61085.2024.10553755
- Hoshiar, S., & Kiran, S. (2024). Copyright in the age of artificial intelligence: Unravelling the complexities for the protection of AI-generated work. 2024 ITU Kaleidoscope: Innovation and Digital Transformation for a Sustainable World (ITU K), 1–7. https://doi.org/10.23919/ITUK62727.2024.10772817
- Blackman, R., & Euchner, J. (2024). Ethics in the age of AI: A conversation with Reid Blackman. Research-Technology Management, 67(1), 15–21. https://doi.org/10.1080/08956308.2024.2280481
- Brown, A. (2023). The human costs of data-driven AI. Digital Leaders. https://digileaders.com/the-human-costs-of-data-driven-ai/
- Dzieza, J. (2023). Inside the AI factory: The humans that make tech seem human. The Verge. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
- Kumar, Y., Gordon, Z., Morreale, P., Li, J. J., & Hannon, B. (2023). Love the way you lie: Unmasking the deceptions of LLMs. 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security Companion (QRS-C), 875–876. https://doi.org/10.1109/QRS-C60940.2023.00049
- Luccioni, S. (2023). The mounting human and environmental costs of generative AI. Ars Technica. https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/
- Stable bias: Analyzing societal representations in diffusion models [Executive summary]. (2025). Hugging Face. https://huggingface.co/spaces/stable-bias/stable-bias