REC Bibliography

Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and
solutions of generative AI: An interdisciplinary perspective. Informatics, 11(3), Article 58.
https://doi.org/10.3390/informatics11030058

Augenstein, I., Baldwin, T., Cha, M., Chakraborty, T., Ciampaglia, G. L., Corney, D., DiResta, R.,
Ferrara, E., Hale, S., Halevy, A., Hovy, E., Ji, H., Menczer, F., Miguez, R., Nakov, P.,
Scheufele, D., Sharma, S., & Zagni, G. (2024). Factuality challenges in the era of large
language models and opportunities for fact-checking. Nature Machine Intelligence, 6(8),
852–863. https://doi.org/10.1038/s42256-024-00881-z

Boediman, E. P. (2025). Exploring the impact of deepfake technology on public trust and media
manipulation: A scoping review. Jurnal Komunikasi, 19(2), 313–334.
https://doi.org/10.20885/komunikasi.vol19.iss2.art8

Boissin, E., Costello, T. H., Spinoza-Martín, D., Rand, D. G., & Pennycook, G. (2025). Dialogues
with large language models reduce conspiracy beliefs even when the AI is perceived as human.
PNAS Nexus, 4(11), Article pgaf325. https://doi.org/10.1093/pnasnexus/pgaf325

Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation.
Data & Policy, 3, Article e32. https://doi.org/10.1017/dap.2021.20

Consensus. (2026, April 18). AI’s impact on misinformation, disinformation, fake news, and
conspiracy theories [AI-generated literature synthesis]. Consensus.
https://consensus.app

Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through
dialogues with AI. Science, 385(6714), Article eadq1814.
https://doi.org/10.1126/science.adq1814

Lundberg, E., & Mozelius, P. (2025). The potential effects of deepfakes on news media and
entertainment. AI & Society, 40, 2159–2170. https://doi.org/10.1007/s00146-024-02072-1

Odunlami, O. A., & Banjo, O. A. (2025). Deepfakes and the crisis of trust: Public perception of
media authenticity in the age of synthetic content. Nigerian Journal for Technical
Education, 24(2).

Pilati, F., & Venturini, T. (2025). The use of artificial intelligence in counter-disinformation:
A world wide (web) mapping. Frontiers in Political Science, 7, Article 1517726.
https://doi.org/10.3389/fpos.2025.1517726

Romanishyn, A., Malytska, O., & Goncharuk, V. (2025). AI-driven disinformation: Policy
recommendations for democratic resilience. Frontiers in Artificial Intelligence, 8,
Article 1569115. https://doi.org/10.3389/frai.2025.1569115

Saeidnia, H. R., Hosseini, E., Lund, B., Alipour Tehrani, M., Zaker, S., & Molaei, S. (2025).
Artificial intelligence in the battle against disinformation and misinformation: A systematic
review of challenges and approaches. Knowledge and Information Systems, 67, 3139–3158.
https://doi.org/10.1007/s10115-024-02337-7

Shah, S. B., Thapa, S., Acharya, A., Rauniyar, K., Poudel, S., Jain, S., Masood, A., & Naseem, U.
(2024). Navigating the web of disinformation and misinformation: Large language models as
double-edged swords. IEEE Access, 12, 169262–169282.
https://doi.org/10.1109/ACCESS.2024.3406644

Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, misinformation, and
disinformation in the era of frontier AI, generative AI, and large AI models. 2023 IEEE
International Conference on Computer Applications (ICCA), 1–7.
https://doi.org/10.48550/arXiv.2311.17394

Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political
microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2),
Article pgae035. https://doi.org/10.1093/pnasnexus/pgae035

Williams-Ceci, S., Jakesch, M., Bhat, A., Kadoma, K., Zalmanson, L., & Naaman, M. (2026). Biased
AI writing assistants shift users’ attitudes on societal issues. Science Advances, 12,
Article eadw5578. https://doi.org/10.1126/sciadv.adw5578