For society, this can be problematic. In 2024, UNESCO published research showing that GenAI systems associate women with terms like "home," "family," and "children" four times more frequently than men. Meanwhile, names that sound male are linked to terms like "career" and "executive." EqualVoice used GenAI image generators in 2024 to test for bias. They found a correlation between prompts and stereotypes: "CEO giving a speech" produced images of men 100% of the time, and 90% of these were white men. The prompt "businesswoman" yielded images of women who were 100% young and conventionally attractive. Again, 90% of these were also white.
While this is troubling at a societal level, bias in GenAI can also constitute a serious threat to individual organizations.
Businesses using the technology to help map market segments, design and produce products, and engage with customer bases risk suboptimal risk management and decision-making unless they introduce effective approaches, measures, checks, and balances to mitigate bias. Innovation and growth can be undermined, and opportunities can be missed, when organizations fail to integrate diverse user needs, priorities, and perspectives. Brand reputation and loyalty suffer when organizations fail to uphold ethical standards and societal values. And in a world where adoption of GenAI is accelerating, it is only natural that laws and guidelines around its use will intensify. Organizations found wanting in the regulatory context are likely to face increasingly stringent financial and operational consequences.