Optimizing the Accuracy of Fake News Detection in Social Media Using Multimodal Learning
DOI: https://doi.org/10.69968/ijisem.2025v4i3281-287
Keywords: Fake news detection, BERT, Natural Language Processing, Deep learning, Text classification, Misinformation, Social media analytics, Contextual embeddings
Abstract
The spread of false information in the digital age is a serious problem because it shapes public opinion, political processes, and user behaviour on social media and the wider internet. Standard detection methods that rely on feature engineering, linguistic indicators, or metadata often struggle with scalability and generalisability. Recent developments in natural language processing, such as Bidirectional Encoder Representations from Transformers (BERT), offer a more effective approach by capturing deep contextual and semantic relationships within text. In this study, a BERT-based model is built to identify fake news using a dataset of news articles. The model passes through several steps, including tokenization, embedding, and fine-tuning, so that it can learn patterns that distinguish real news from fake news. Performance is measured with accuracy, precision, recall, F1-score, and AUC-ROC, and the results are compared against traditional machine learning methods. The results show that BERT outperforms the older models because it captures subtle patterns in language, which improves detection accuracy; they also suggest that deep learning models perform more reliably when trained on large datasets. BERT thus proves to be an effective and practical method for detecting fake news, and this work provides a strong base for future research in automated misinformation detection.
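The evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) can be sketched in plain Python for the binary fake/real case. This is an illustrative helper, not the authors' code; the function name and label convention (1 = fake, 0 = real) are assumptions for the example.

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels
    (1 = fake news, 0 = real news)."""
    # Count the four confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


# Example with five hypothetical predictions from a classifier:
scores = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In practice these values (plus AUC-ROC, which additionally needs the model's raw probability scores rather than hard labels) would typically come from a library such as scikit-learn rather than hand-rolled code.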
Copyright (c) 2025 Priyal Verma, Nagesh Salimath

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.