Breakstone, J., Smith, M., Wineburg, S., Rapaport, A., Carle, J., Garland, M., & Saavedra, A. (2021). Students’ Civic Online Reasoning: A National Portrait. Educational Researcher, 50(8), 505–515. doi: 10.3102/0013189X211017495
Note that this paper was published in 2021.
“Asked to investigate a site claiming to ‘disseminate factual reports’ on climate science, 96% never learned about the organization’s ties to the fossil fuel industry. Two thirds were unable to distinguish news stories from ads on a popular website’s home page. More than half believed that an anonymously posted Facebook video, shot in Russia, provided ‘strong evidence’ of U.S. voter fraud. Instead of investigating the organization or group behind a site, students were often duped by weak signs of credibility: a website’s ‘look,’ its top-level domain, the content on its About page, and the sheer quantity of information it provided.”
Students are not going to fact-check the responses from AI because they don’t fact-check other sources either.
Personally, I’m not sure that this changes anything. Just as some sources of information are more reliable, credible, and accurate than others, some generative AI systems will be more reliable, credible, and accurate than others. Part of AI literacy will be trying to get students to choose appropriate sources.
However, I’m not hopeful that this will be effective. People (including teachers) tend to choose information sources that align with their ideologies and belief systems. Now they’ll simply choose AI systems that align with their existing biases and prejudices.
This is not an AI problem. This is a being-human problem.