A Survey of Deepfake Text Detection and Security Attack Prediction via Sentiment Analysis
International Journal of Applied Intelligent Computing and Informatics
Article 2, Volume 1, Issue 2, September 2025, Pages 11-35
Document Type: Original Article
DOI: 10.21608/ijaici.2025.358840.1009
Authors
Norhan Farouk*1; Sara Sweidan2; Mohamed Taha3
1 Artificial Intelligence Department, Faculty of Computers and Artificial Intelligence, Benha University, Egypt.
2 Assistant Professor, Artificial Intelligence Department, Faculty of Computers and Artificial Intelligence, Benha University, Egypt.
3 Faculty of Computers and Artificial Intelligence, Benha University, Benha, Qalyubia Governorate, Egypt.
Abstract
This survey investigates the relationship between sentiment analysis on social media, security threat prediction, and deepfake text detection. The rapid spread of machine-generated content has made identifying harmful material essential for preserving cybersecurity and information integrity. Recent developments in machine learning models are reviewed, with a focus on transformer-based architectures such as BERT, RoBERTa, and DeBERTa, which have shown exceptional performance in recognizing text produced by artificial intelligence. Particular attention is given to sentiment analysis as a security threat prediction tool, demonstrating how shifts in user sentiment can serve as early warning indicators of possible intrusions. The study also examines a range of detection techniques, emphasizing how well machine learning classifiers such as Random Forest, Logistic Regression, and Gradient Boosting differentiate between human-written and AI-generated content. Experimental results show that DeBERTa-based frameworks outperform conventional techniques in identifying malicious intent and recognizing deepfake text with high accuracy. The survey further highlights key challenges in deepfake detection, including cross-platform analysis, adversarial robustness, and the evolving nature of AI-generated material. Future research directions emphasize improving model efficiency, reducing false positives, and integrating hybrid approaches to enhance detection accuracy. The findings underscore the need for ongoing innovation in AI-powered security measures to counter emerging online threats. Ultimately, this work contributes to the growing body of research on AI-generated text detection and its implications for social media integrity, cybersecurity, and misinformation prevention.
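For illustration, the following is a minimal sketch of the kind of transformer-based deepfake-text detector the abstract describes, assuming the Hugging Face Transformers library and the publicly available microsoft/deberta-v3-base checkpoint; the checkpoint name, label mapping, and scoring function are assumptions for demonstration, not the authors' implementation, and a real detector would first fine-tune the classification head on labeled human-written versus machine-generated text.

# Minimal sketch: scoring a text as human-written vs. machine-generated
# with a DeBERTa sequence classifier. The binary head here is freshly
# initialized, so the printed probability is illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/deberta-v3-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_text(text: str) -> float:
    """Return the probability that `text` is machine-generated (label 1, assumed)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(score_text("This is a sample social media post to classify."))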
Keywords
Sentiment Analysis; Security Attack; Deepfake; Machine-generated Text; Classification