Thank you for a very informative explanation of the automated labeling algorithms.
On the topic of censorship becoming more prevalent and restrictive, I will share an example of a very restrictive and badly designed censorship system. In my not-so-great state of Florida, a new school censorship law has taken effect. It takes only one person complaining that a book contains content unsuitable for school children to have the book pulled from the library for review. The complainer need not reveal the alleged offending content. A review panel must read the entire book looking for any offending content. As you might expect, hundreds of books in some counties are awaiting review. In Escambia County, adjacent to my home county, one woman alone is responsible for the majority of complaints. One person!
It surprises me that in a litigious society like the US, some naturist with a legal background hasn't taken a case against the woman for vexatious complaints. I do like, however, that the lawmakers have to actually read the book that they are ruling on. At least they should be aware of the context of the content they are looking to restrict.
We live in a culture of offence, where anyone can restrict the activity of others, and at some point I expect the pendulum to swing back towards common sense. I hope it won't be long.
I didn't get too far into reading this before I was reminded of a saying which became a common phrase once computers became "the norm": "put shit in, get shit out"!
In this conversation about AI, I think the same applies; the bias of the programmers etc. becomes an obvious challenge for people such as ourselves (sadly for us!).
During the summer of 2022/2023, I was a very naughty boy and got banned for 10 hours by the Facebook police for putting up a link in a comment to assist a fellow naturist, which included some photos:
https://www.haurakinaturally.nz/
Check it out - make up your own minds.
For what it was worth, I did challenge the ruling from the Facebook police, albeit I already knew it could be seen as a waste of time. However, I subscribe to what Steve has said within this blog: "If we don't do it, nobody will, and that might make things worse."
Best Wishes to All for 2024!
Most people can't discern between non-sexual nudity and explicit content. An AI trained on their judgments is even less likely to. It will inevitably match the biases of whoever sets up the training.
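To make that concrete, here is a minimal sketch of the point (entirely hypothetical features and labels, and assuming scikit-learn is available): a supervised classifier simply reproduces whatever its labelers decided.

# Toy "image classifier". Each image is reduced to two made-up numbers:
# [fraction of skin-tone pixels, sexual-context score]. The only ground
# truth the model ever sees is the human labelers' verdict.
from sklearn.linear_model import LogisticRegression

features = [
    [0.80, 0.10],  # beach naturist photo: lots of skin, no sexual context
    [0.70, 0.15],  # another non-sexual nude
    [0.60, 0.90],  # genuinely explicit image
    [0.90, 0.85],  # genuinely explicit image
    [0.10, 0.05],  # clothed portrait
    [0.20, 0.10],  # clothed portrait
]

# Labelers who treat ALL nudity as explicit mark the naturist photos as
# 1 ("explicit") too, so skin fraction alone separates the classes.
biased_labels = [1, 1, 1, 1, 0, 0]

model = LogisticRegression().fit(features, biased_labels)

# A new non-sexual naturist photo gets flagged as explicit, faithfully
# reproducing the labelers' bias rather than any real distinction.
print(model.predict([[0.75, 0.05]]))  # -> [1]

Swap in labels from people who do distinguish non-sexual nudity (marking the naturist photos 0) and the identical code learns the opposite rule; the model has no view of its own.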
Just as different people have different thoughts about any one topic, the group of programmers building an AI may have different opinions about nudity. Those varying views will remain forever.
Unfortunately, my experience of appealing against the categorisation that the AI system applies to my photos is that the humans reviewing the labels have the same view of nudity as the AI system. In other words, my pictures remain labelled as "sexually explicit" images.
Well, if we do nothing, what outcomes can we expect? The conversation is a great starter.
AI can be taught. Remember, though, that it is supervised by humans with their own perceptions. We still need to teach the public the difference, as schools and parents still have biases.
Will the AI trainers allow facts to change their opinions for the better?
Unfortunately, "social media" in the U.S. and most parts of the world seem to have sunk to the septic tank level. NOT because of too much "sensitive" or "explicit" content - but instead because censorship is spreading like wildfire. Anything that threatens the delicate sensitivities of a large enough group of people must be vigorously suppressed by teams of "content moderators" or increasingly by "AI" systems that simply incorporate the prejudices of the majority. Yet at the same time, pervasive "advertising" that financially supports social media continues becoming clickbait sewage.
In the U.S. and other "modern" societies, it's mega-scale businesses that call the tune, while in authoritarian and less modern societies it's the government that does. The results are largely the same. The general public is allowed to read or see only what the powers that be don't censor.
How this situation impacts naturism is painfully obvious. Every time some moderator or algorithm decrees that naturist content is unfit for the "normal" person to encounter, the negative opinions of "normal" people towards naturism are reinforced. Naturists (and other minorities) are muzzled because their opinions - especially in the form of visual images - might upset too many others. This problem doesn't affect only naturists in the U.S. Books are rapidly being banned from school and public libraries across at least half of U.S. states if they deal with controversial lifestyles. "Normal" people mustn't be exposed to anything that might disturb them. Especially if the profits or power of the society's rulers could be impacted.
Just a few decades ago it seemed possible that early social media could provide users with a positive view of naturism (and other legitimate but controversial lifestyles). But now the garbage pervading most social media is doing the exact opposite.
Training AIs is ongoing, not a one-and-done affair. For example, it was recently discovered that illegal images of children were in a dataset used to train image-generation AIs. The maintainers were able to search the data for the images, remove them and retrain. In a neural network, the training data and the network of weights the AI develops are always separate, so it's never too late to "fix" an AI, but there have to be 1) a will to make changes and 2) agreement on what the changes should be. When it comes to nudity, I'm not sure we have either, as some people DO believe that nudity equals pornography, and they have no will to change a system that perpetuates that viewpoint.
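As a rough illustration of that separation (hypothetical record format and helper names, not any real pipeline), the "fix" described above is conceptually just filtering the dataset and training again:

# The trained weights live in the model; the examples live in the dataset.
# Deleting bad records changes nothing about a model already trained on
# them - you must retrain (or fine-tune) to get a clean model.

KNOWN_BAD_HASHES = {"deadbeef"}  # placeholder list of flagged image hashes

def is_flagged(record):
    # Hypothetical detector: match the image's hash against the bad list.
    return record["sha256"] in KNOWN_BAD_HASHES

dataset = [
    {"sha256": "deadbeef", "pixels": "..."},  # offending image
    {"sha256": "cafef00d", "pixels": "..."},  # ordinary image
]

# 1) Search the data for the offending images and remove them.
clean_dataset = [r for r in dataset if not is_flagged(r)]
print(len(dataset), "->", len(clean_dataset))  # 2 -> 1

# 2) Retrain on the cleaned data; only then do the weights "forget".
# model = train(clean_dataset)  # training loop omitted from this sketch

The hard part, as the comment says, is not the mechanics but steps the code can't show: deciding what counts as bad data, and mustering the will to rerun the training at all.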
The specialised AIs (for finding cancers and suchlike) are a great step forward. The generalised AIs (like ChatGPT) worry me because they don't seem to have boundaries and will cheerfully present fiction as fact. Herein lies the problem: an AI is still a computer program, born of a computer programmer, who is human. The rules of engagement, flaws and biases of the supposedly indifferent AI are therefore not so indifferent.
There also seems to be a media bias towards describing any computer program now as AI, blurring perception and understanding as to what AI actually is.
The cherry on top is the ready acceptance by society and business (and government!!) of treating the computer as infallible. Computer says "no" and that's that. The human dimension is vanishing from decision-making. So add a flawed program to unthinking acceptance of its output and we get ... ? (And in case a real-life example is needed, check out the UK's current scandal involving the Post Office computer system.)
It is not lawmakers who review the books, but a panel of educators whose cultural and political views are unknown.