Comment: A disturbing new language form has emerged, (no) thanks to generative AI. It involves language trickery that enables users to evade platform filters intended to stop the creation of sexualised imagery. The content served up routinely objectifies women and reinforces damaging gender stereotypes.
Globally, generative AI produces 34 million unique images every day in response to users’ written ‘text prompts’. Put simply, generative AI refers to computer programs—often called large language models—that create new content from old.
These models are trained on huge datasets scraped from the internet, harvesting text and billions of annotated images. It’s through this process that image-generative AI learns to associate certain words and phrases with particular visual concepts. But I’ve found that an emerging covert language, which I’ve called ‘algotext’, is also being used.
Parallels can be drawn with social media language trends, such as the use of unalive instead of dead, a form of ‘algospeak’ from which the term algotext takes its inspiration. If you haven’t heard of it, algospeak is a term coined to describe how language is changing to accommodate computer algorithms.
Uncovering algotext
I discovered the use of algotext during my recent linguistics research. As part of this research, I analysed 7.4 million user text prompts from popular AI platform Midjourney to detect embedded gender stereotypes.
My analysis of these text prompts revealed that images of women were requested twice as often as images of men, and that prompts for women focused primarily on how they look ("lips, hips, thin waist"). In contrast, prompts for images of men focused more on what they do ("the man is holding a gun and a leash").
Image analysis revealed idealised and hyper-feminised representations of women, characterised by themes of eternal youth, whiteness, vulnerability, and sexualisation. Images of men tended to present idealised ‘apex masculinity’—whiteness, physical dominance, aggression, and control.
In an attempt to prevent the creation of explicit sexual images, many generative AI platforms, including Midjourney, use security filters to weed out certain words such as "breasts" and "lingerie". Midjourney’s community guidelines also explicitly forbid the creation of adult content including nudity, sexualised imagery, and the sexualisation of children or minors.
However, I found that algotext, in its various forms, can bypass these security filters. For instance, while the word lingerie is banned on Midjourney, the intentionally misspelt version "lingeri" is used prolifically.
Other examples of algotext include specific words used to trigger a particular look or vibe in AI-generated images such as "La Perla Neoprene" and "Chantal Thomass", both of which refer to lingerie brands. This effectively nudges generative AI towards creating a sexualised aesthetic without the user requesting it explicitly.
Sexualisation of women is also reinforced through requests for revealing or see-through clothing, or through the use of synonyms to replace banned words. For example, while the words "bikini", "speedo", "bra", and "underwear" are banned, there are numerous requests for women or girls wearing spandex, gym wear, or yoga wear. Using these prompts often results in images of women dressed in revealing attire or nude.
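To see why these workarounds succeed, it helps to picture how a simple word-based filter operates. Midjourney has not published how its moderation actually works, so the Python sketch below is purely hypothetical: the banned-word list and the is_blocked function are my own illustration of why an exact-match blocklist misses misspellings and brand-name stand-ins.

```python
# Hypothetical sketch of a naive word blocklist; not Midjourney's real,
# proprietary moderation system.
BANNED_WORDS = {"lingerie", "bikini", "bra", "underwear"}

def is_blocked(prompt: str) -> bool:
    """Return True if any banned word appears verbatim in the prompt."""
    return any(word in BANNED_WORDS for word in prompt.lower().split())

print(is_blocked("woman in lingerie"))           # True  -- exact match is caught
print(is_blocked("woman in lingeri"))            # False -- the misspelling slips through
print(is_blocked("woman in La Perla neoprene"))  # False -- the brand name carries the meaning
```

A filter built this way only ever matches the exact strings it has been given, which is precisely the gap that algotext exploits.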
Perpetuating harmful stereotypes
My findings echo the results of other research that has exposed how generative AI can perpetuate gender stereotypes and harm women.
In its 2024 report on gender bias in large language models, Unesco warned of the “persistent social biases” in these models and the risks they pose. These risks include exacerbating gender-based violence, increasing online stalking, and fuelling the creation of deepfakes, the report said.
This is no small problem. In terms of numbers, the global AI market is predicted to be worth US$1.3 trillion by 2032. Midjourney, a small cog in the generative AI juggernaut, already has more than 20 million subscribers, two-thirds of whom are male. The majority are aged 18 to 34, with the largest user group based in the United States.
These numbers show the scale and popularity of image-generative AI while also offering a snapshot of who the technology appeals to (young white men). They further highlight the financial incentives that motivate the industry, where significant profits stand to be made.
The ‘echo chamber’ effect—where generative AI perpetuates its existing biases and stereotypes as it repurposes its own work to create new material—risks exponentially compounding the problem.
Andrew Tate and others who have championed regressive gender ideas will no doubt be pleased. In the broader context of growing toxic masculinity, image-generative AI is a powerful tool in the ongoing subjugation of women. The rest of us—especially those who care about what gender equality will look like in the near future—should be increasingly alarmed as millions upon millions of AI-generated images embed their biases more deeply into digital culture.
This article was originally published on Newsroom.
Carla Moriarty is a Master of Linguistics graduate from Te Herenga Waka—Victoria University of Wellington.