MITIGATING BIAS IN AI ALGORITHMS: TECHNIQUES FOR FAIR AND ETHICAL AI SYSTEMS
Aswan Science and Technology Bulletin
Articles in Press, Accepted Manuscript, Available Online from 24 September 2025
Document Type: Original Article
DOI: 10.21608/astb.2025.390962.1025
Authors
Iehab Abduljabbar Kamil*; Mohanad Abdulsallam Al-Askari
Department of Information Systems, College of Computer Sciences and Information Technology, University of Anbar, Iraq
Abstract
Bias in AI algorithms leads to unfair and discriminatory outcomes, necessitating robust mitigation techniques for establishing ethical AI systems. This study evaluates techniques for overcoming such bias, including adversarial debiasing, reweighting, and synthetic data generation, applied at the data collection, model training, and model deployment stages. Adversarial debiasing and oversampling reduce bias at little cost to accuracy and fairness, as measured by the equal opportunity difference (EOD) and statistical parity difference (SPD). The development of causal models and fairness-aware modifications further enhances transparency and equity. Nevertheless, challenges remain: trade-offs between applicability and accuracy, ethical issues in data use, and computational cost. Aligning AI systems with societal values requires regular auditing, interdisciplinary collaboration, and adaptive governance. Future work should optimize scalability, diversify datasets, and integrate explainable AI (XAI) for accountable decision-making.
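As a rough illustration of the quantities the abstract refers to (this code is not from the paper), the SPD and EOD fairness metrics and the standard Kamiran-Calders reweighting scheme can be sketched in plain Python. The function names, the toy data, and the weight formula w(a, y) = P(A=a)·P(Y=y) / P(A=a, Y=y) are standard fairness-literature definitions assumed for illustration, not details taken from this article.

```python
# Sketch of the SPD and EOD fairness metrics and reweighting, assuming a
# binary prediction y_pred, binary label y_true, and a binary protected
# attribute `group` (0 = unprivileged, 1 = privileged).

def statistical_parity_difference(y_pred, group):
    """SPD = P(y_pred=1 | unprivileged) - P(y_pred=1 | privileged)."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return rate(0) - rate(1)

def equal_opportunity_difference(y_true, y_pred, group):
    """EOD = TPR(unprivileged) - TPR(privileged), over examples with y_true=1."""
    def tpr(g):
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(pos) / len(pos)
    return tpr(0) - tpr(1)

def reweighting_weights(y_true, group):
    """Kamiran-Calders reweighting: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y),
    upweighting (group, label) combinations that are under-represented."""
    n = len(y_true)
    def prob(cond):
        return sum(1 for i in range(n) if cond(i)) / n
    weights = []
    for i in range(n):
        a, y = group[i], y_true[i]
        pa = prob(lambda j: group[j] == a)
        py = prob(lambda j: y_true[j] == y)
        pay = prob(lambda j: group[j] == a and y_true[j] == y)
        weights.append(pa * py / pay)
    return weights
```

A value of 0 for either metric indicates parity between the two groups; negative values indicate the unprivileged group receives favorable predictions less often. The reweighting output would typically be passed as per-sample weights to a classifier's training routine.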
Keywords
Bias Mitigation; Machine Learning; Synthetic Data Generation; Oversampling; Debiasing