There seems to have been a change recently on the Bluesky platform. When I joined Bluesky in September 2023, there was (and still is) the facility to label your images before posting them.
There are four settings under which you can categorise your images.
Select nothing and the image is considered suitable for all ages; select “Suggestive” for pictures meant for adults; select “Nudity” for artistic or non-erotic nudity; or select “Porn” for sexual activity or erotic nudity.
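For anyone posting through the API rather than the app, the same self-labels can, as I understand it, be attached in code. The snippet below is only a rough sketch using the official @atproto/api TypeScript client; the handle, password and image bytes are placeholders, and “nudity” is the label value that appears to correspond to the “Nudity (artistic or non-sexual)” option.

```typescript
import { BskyAgent } from "@atproto/api";

// Rough sketch only: identifiers and image data are placeholders.
async function postWithNudityLabel() {
  const agent = new BskyAgent({ service: "https://bsky.social" });
  await agent.login({
    identifier: "example.bsky.social", // placeholder handle
    password: "app-password-here",     // placeholder app password
  });

  // Upload the image; in a real script the bytes would be read from a file.
  const imageBytes = new Uint8Array();
  const upload = await agent.uploadBlob(imageBytes, { encoding: "image/png" });

  await agent.post({
    text: "A non-sexual nude image.",
    embed: {
      $type: "app.bsky.embed.images",
      images: [{ image: upload.data.blob, alt: "Description of the image" }],
    },
    // The self-label travels with the post record itself.
    labels: {
      $type: "com.atproto.label.defs#selfLabels",
      values: [{ val: "nudity" }],
    },
  });
}
```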
While not a perfect system, as one person’s nudity is another person's pornography, it seemed a good attempt at self-regulation and filtering out content that others might not want to see.
There was a school of thought amongst some members of the Bluesky nudist community that there was no point labelling their images, as there is nothing inherently wrong with simple non-sexual nudity, and that adding labels to their images perpetuated the lie that nudity is something to be ashamed of.
I took a more conciliatory approach and decided to label my images so as not to upset any of the more delicate members of the wider Bluesky community.
For the system to work best, individual users need to set their own moderation settings to filter the content that they see. Users can choose to hide, warn about, or show posts that fall under the following categories (a rough sketch of how these preferences might look in code follows the list):
Explicit Sexual Images: i.e. pornography
Other Nudity: including non-sexual and artistic nudity
Sexually Suggestive: does not include nudity
Violent / Bloody: gore, self-harm, torture
Hate Group Iconography: images of terror groups, articles covering events, etc.
Spam: excessive unwanted interactions
Impersonation: accounts falsely claiming to be people or organisations
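To make the hide / warn / show choice concrete, here is a small illustrative model in TypeScript. It is not the actual Bluesky preference schema (the real keys and values may well differ); it simply captures the decision each viewer makes per category, including the combination discussed later in this post.

```typescript
// Hypothetical model of per-viewer moderation preferences.
// The real Bluesky preference keys and values may differ.
type Visibility = "hide" | "warn" | "show";

type Category =
  | "explicit-sexual-images"
  | "other-nudity"
  | "sexually-suggestive"
  | "violent-bloody"
  | "hate-group-iconography"
  | "spam"
  | "impersonation";

type ModerationPrefs = Record<Category, Visibility>;

// Example: warn on sexual or suggestive content, but show non-sexual nudity.
const myPrefs: ModerationPrefs = {
  "explicit-sexual-images": "warn",
  "other-nudity": "show",
  "sexually-suggestive": "warn",
  "violent-bloody": "hide",
  "hate-group-iconography": "hide",
  "spam": "hide",
  "impersonation": "hide",
};

// Given the category a post has been labelled with, decide what the viewer sees.
function decide(category: Category, prefs: ModerationPrefs): Visibility {
  return prefs[category];
}
```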
With regard to the nudity categories, it seems that these match reasonably well with the settings available to label images.
I am not sure how Bluesky labels or decides on violence or hate speech but I assume there are AI filters that scan the content.
All of this seemed to be working smoothly until around the beginning of December 2023, when I posted an image and, shortly afterwards, the post carried a warning that
“This post has been labelled”
with a link to appeal the decision.
The post was an AI-generated image of a naked couple standing in front of a Christmas tree that I had posted as part of a conversation about how some AI image generators are getting some very simple things wrong like the number of fingers or arms people have. In this instance, the AI had drawn the female of the couple quite well. The bearded male seemed to have had an unfortunate accident to his genitals which appeared to have been reconstructed based on female anatomy rather than male, although the scrotum seemed intact.
Before posting the image, I labelled it as containing “Nudity (artistic or non-sexual)”, as was appropriate.
The irony that an AI algorithm had deemed inappropriate a non-sexual image generated by an AI algorithm did not escape me.
I appealed the decision on the grounds that the original image was obviously AI-generated, had been labelled as containing nudity, and contained nothing sexual, explicit or even slightly suggestive.
I have continued to label my Bluesky images according to the options given to me by the platform, and all images that contain a hint of nudity, even images where genitals are not shown, have been tagged by their AI as “labelled” with an option to appeal. Some of those appeals have been successful, but not all.
A number of my naturist connections on the platform had noticed the same messages appearing on similar posts, and there were discussions about what was happening and what could be done about the obvious mislabelling of non-sexual nudity posts.
It was pointed out to me by a fellow Bluesky member that the images were being relabelled as “This post contains explicit sexual images”, and while this is not easily discerned on Bluesky itself, the Graysky app clearly shows the warning. Graysky is a third-party Bluesky client with additional features.
To check the system, I posted a simple black and white image of myself working in my garage, with no genitals visible. Sure enough, within a few minutes, the image came up with the “This image has been labelled” warning and the option to appeal. I checked the image in the Graysky application, and there it was, complete with the warning that the post contained explicit sexual images.
Interestingly, since updating the app on my phone, the “Your image has been labelled” warning no longer appears, and the “appeal the decision” button has been hidden in the menu options.
If you have set your Bluesky moderation settings to warn about or hide “Sexually Suggestive” and “Explicit Sexual Images” and to show “Other Nudity”, and you label your images appropriately, then any of your images that subsequently show a warning are likely to have been recategorised.
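One way to check what has happened to a particular post, as I understand it, is to fetch it through the API and look at which labels it now carries and who applied them; each label records the DID of whoever applied it, so a self-label and a moderation label can be told apart. The sketch below assumes the official @atproto/api TypeScript client and the public AppView endpoint; the post URI is a placeholder.

```typescript
import { BskyAgent } from "@atproto/api";

// Rough sketch: inspect the labels attached to a post.
// The post URI passed in is a placeholder.
async function inspectLabels(postUri: string) {
  const agent = new BskyAgent({ service: "https://public.api.bsky.app" });
  const res = await agent.getPosts({ uris: [postUri] });

  for (const post of res.data.posts) {
    for (const label of post.labels ?? []) {
      // label.val is the label name (e.g. "nudity", "sexual", "porn");
      // label.src is the DID of the account or service that applied it.
      console.log(`${label.val} applied by ${label.src}`);
    }
  }
}

// Example call with a placeholder URI:
// inspectLabels("at://did:plc:example/app.bsky.feed.post/xxxxxxxx");
```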
Bluesky is, of course, allowed to use whatever system it likes to moderate the platform, and as users of the service we have to abide by its conditions.
AI needs feedback to refine its effectiveness. If errors are not challenged early on, we risk amplifying these biases and embedding them permanently in its outputs.
It is incumbent on all of us to challenge the AI ranking of our naturist images where they have been labelled as explicitly sexual, not just on Bluesky but in every situation where AI is trying to categorise nudity. If we do nothing, the inability to separate non-sexual nudity from explicit content is perpetuated.
“AI is good at describing the world as it is today with all of its biases, but it does not know how the world should be.” — Joanne Chen
Thank you for reading. Have a comfortable day.
Next Week:
Use it or lose it.
Giving people a reason to join in.
The specialised AIs (for finding cancers and the like) are a great step forward. The generalised AIs (like ChatGPT) worry me because they don’t seem to have boundaries and will cheerfully present fiction as fact. Herein lies the problem: the origin of an AI program is still a computer program, born of a computer programmer, who is human. The rules of engagement, flaws and biases of the supposedly indifferent AI are therefore not so indifferent.
There also seems to be a media bias towards describing any computer program now as AI, blurring perception and understanding as to what AI actually is.
The cherry on top is the readiness of society and business (and government!) to treat the computer as infallible. Computer says “no” and that’s that. The human dimension is vanishing from decision-making. So add a flawed program to unthinking acceptance of its output and we get ... ? (And in case a real-life example is needed, check out the UK’s current scandal involving the Post Office computer system.)
Training AIs is ongoing, not a one-and-done affair. For example, it was recently discovered that illegal images of children were in the data used to train image-generation AIs. It was possible to search the data for the images, remove them and retrain the AI. In a neural network, the training data and the network the AI develops are separate, so it's never too late to "fix" an AI, but there have to be 1) a will to make changes and 2) agreement on what the changes should be. When it comes to nudity, I'm not sure we have either, as some people DO believe that nudity equals pornography and they have no will to change a system that perpetuates that viewpoint.