8 Comments
Aug 5, 2022 · Liked by BOPBadger

Another scenario: recently, I had reason to do a tiny bit of research about Dubai. I had never 'googled' it before.

Guess what advertisements I was getting for the next few days on my 'news feed' via my Facebook account…?

If people on this planet still think they are not being watched… think again; or go and chat with those same people as they line up outside the shops that sell penny-farthing bikes, black-and-white televisions on four legs, and other 'sought-after' items!

Aug 5, 2022 · Liked by BOPBadger

Whatever you do, don't get me started on Facebook's so-called 'community standards'. Your scenario with the furniture struck a chord with me.

Last year, I met a young Italian couple who were in New Zealand on holiday, and we had a similar conversation on this exact topic. They told me how they had been in a café in Auckland, discussing their plans to start a family. The entire conversation was in Italian, and since they were only here on holiday, their phones were still registered in Italy. Yet before they left the café, they were getting advertisements on their phones for baby clothes and baby shops in Auckland. Although the conversation had been in Italian, the advertisements were in English.


Yes, financial return is one issue. As I mentioned in a recent article on this very topic, Facebook claims to care about protecting users from inappropriate content, yet refuses to pour adequate resources into doing so. The failure of those responsible for deciding what is and isn't sexual intent is obviously a huge factor. A highly graphic and realistic sexual image is OK because it is "art" and not real, and therefore supposedly cannot be harmful, yet a photograph of a real naked person going about some innocuous activity is deemed offensive! No algorithm is going to see past that lunacy!


The A.I. must be taught to do a particular task, or at least taught to recognize patterns specifically in the dataset it is trained on. The task current algorithms have been taught is to detect body parts, and they do that job well (mostly). With an appropriately labelled dataset of sexualized versus non-sexualized images, where a significant portion of the non-sexualized content includes nudity, an A.I. would have no problem learning to tell the difference now.
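
For anyone curious what that looks like in practice, here is a minimal sketch of training such a classifier by fine-tuning a pretrained network on a human-labelled dataset. The folder layout (data/train/sexualized, data/train/nonsexualized), the ResNet-18 backbone, and the hyperparameters are all my own assumptions for illustration, not anything Facebook is known to use.

```python
# Minimal sketch: fine-tune a pretrained image model into a two-class
# classifier (sexualized vs. non-sexualized) from a human-labelled dataset.
# Assumed folder layout (hypothetical):
#   data/train/sexualized/*.jpg
#   data/train/nonsexualized/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder derives the binary labels from the two sub-directory names.
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images and replace its
# final layer with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # a handful of epochs is enough for a sketch
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is that the model itself is the easy part; the hard part is producing the labelled dataset and the rule that defines the labels in the first place.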

The problem comes down to (1) how would they draw up the rules, and (2) who is asking them to make such a dataset and train such an A.I.? The algorithms are used to enforce their rules, and their rules are written so that female nipples are bad, male nipples are fine, genitalia are bad, etc. So to enforce the rules they currently have in place, the A.I. is doing the right task. The real technical problem is one of writing an enforceable rule: what is a "sexualized image"?

A human can tell the difference, and an A.I. can learn to tell the difference as well as a human given a human-labelled dataset, but how do you write the rule? Also, why? Who is beating down your door asking you to rewrite the rule anyway? How will it make your company more money?
