Another scenario: recently, I had reason to do a tiny bit of research about Dubai. I had never 'googled' it before.
Guess what advertisements I was getting for the next few days on the 'news feed' of my Facebook account?
If people on this planet still think they are not being watched, think again; or go and chat with those same people as they line up for the shops to open that sell penny-farthing bikes and black-and-white televisions on four legs, amongst other 'sought after' items!
So glad I quit Facebook last year, although Instagram and Google ads seem just as invasive.
Whatever you do, don't get me started on Facebook's so-called 'community standards'. Your scenario re the furniture has struck a chord with me.
Last year, I met a young Italian couple who were in New Zealand on holiday, and we had a similar conversation around this exact topic. They told me how they had been in a café in Auckland, discussing their plans to start a family. The entire conversation was in Italian, and since they were visiting from Italy, their phones were still registered there. Before they had even left the café, they were getting advertisements on their phones for baby clothes and baby shops in Auckland. Although the conversation was in Italian, the advertisements were in English.
That is really spooky. Glad it's not just me being paranoid.
Yes, financial return is one issue. As I mentioned in a recent article on this very topic, Facebook claims to care about protecting users from inappropriate content, yet refuses to pour adequate resources into doing so. The failure of those responsible for determining what is and isn't sexual in intent is obviously a huge factor. A highly graphic and realistic sexual image is deemed acceptable because it is "art" and not real, and therefore supposedly cannot be harmful, yet a photograph of a real naked person going about some innocuous activity is deemed offensive! No algorithm is going to see past that lunacy!
An A.I. must be taught to perform a particular task, or at least to recognize patterns in the dataset it is trained on. The task current algorithms have been taught is to detect body parts, and they do that job well (mostly). Given an appropriately labelled dataset of sexualized versus non-sexualized images, in which a significant portion of the non-sexualized content included nudity, an A.I. would have no problem learning to tell the difference.
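To make that concrete, here is a minimal sketch of the kind of classifier described above, assuming a hypothetical human-labelled folder of 'sexualized' and 'non_sexualized' images (the folder names and paths are illustrative, and this uses standard open-source tools, not anything Facebook actually runs): it simply fine-tunes a pretrained image model on the two labels.

```python
# Minimal sketch: fine-tune a pretrained image classifier on a
# human-labelled, two-class dataset. The folder layout is hypothetical:
#   labelled_images/train/sexualized/...
#   labelled_images/train/non_sexualized/...  (includes non-sexual nudity)
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder derives the two labels from the subfolder names.
train_set = datasets.ImageFolder("labelled_images/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer with a
# two-class head (sexualized vs. non-sexualized).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The code itself is not the hard part; the hard part is deciding, image by image, what belongs in each folder, which is exactly the rule-writing problem below.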
The problem comes down to (1) how would they draw up the rules, and (2) who is asking them to make such a dataset and train such an A.I.? The algorithms are used to enforce their rules, and their rules are written so that female nipples are bad, male nipples are fine, genitalia is bad, and so on. So, to enforce the rules they currently have in place, the A.I. is doing the right task. The real technical problem is one of writing an enforceable rule: what counts as a "sexualized image"?
A human can tell the difference, and given a human-labelled dataset an A.I. can learn to tell it as well as a human can, but how do you write the rule? Also, why? Who is beating down your door asking you to rewrite the rule anyway? How will it make your company more money?
Thank you for your comments. I guess it all boils down to money being the reason for change, or the return on investment. If I spend money to make changes to the dataset, will it earn the company more? And if I fail to make the changes, will it potentially cost the company? The injustices of the current systems possibly only matter to a small minority, so there is no incentive to change.
Perhaps you are right. The only downside I see is that moderation will either rely on volunteers or the site will need to run on a subscription basis to fund the role.