Bias in Text-to-Image Generative AI Models: A Review
Advanced Sciences and Technology Journal
Articles in Press, Accepted Manuscript, Available Online from 22 August 2025
Document Type: Review Article
DOI: 10.21608/astj.2025.397290.1078
Authors
Andrew Wageh1; Omar Waleed1; Omar Mahmoud1; Rasha S. Aboul-Yazeed2
1Software Engineering Department, Faculty of Engineering and Technology, Egyptian Chinese University, Cairo, Egypt
2Software Engineering and Information Technology Department, Faculty of Engineering and Technology, Egyptian Chinese University, Cairo, Egypt
Abstract
Generative artificial intelligence (AI) is a double-edged sword: despite its numerous benefits, it raises serious concerns, and bias in generative image models is one example. In this paper, a review of the most renowned text-to-image models, namely Midjourney, Stable Diffusion, and DALL.E, is conducted to investigate potential bias in the images produced by these AI image generators. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used to identify the relevant studies. Papers were first screened, and inclusion and exclusion criteria were applied to select the most suitable papers for this review, covering the period from 2020 to July 2025. Across the reviewed studies, men and light-skinned individuals were regularly overrepresented, while women and darker-skinned groups were often missing or stereotyped. The review concludes that there is an urgent need to train models on unbiased datasets, to generate and test more images across a range of careers, and to refine model designs to achieve equal representation of groups.
Keywords
Generative AI; Bias; DALL.E; Midjourney; Stable Diffusion