The New Yorker, October 19, 2020, pp. 20-27 | A Reporter At Large | "Explicit Content" | "Facebook is full of hate speech and fake news. Does it want to fix the problem?" | "Critics argue that the company's algorithms are an existential threat to democracy" | By Andrew Marantz
2244 has been following the issue of moderating social media and continues its interest in this surprisingly complex conundrum. The article is full of great examples and is a good read for the uninitiated.
Summary of Article
After its inception, Facebook (FB) began to address "what was allowed…[on FB]…and what was not." This has evolved from "following your gut" (if it feels bad, take it down) to the current-day "Implementation Standards [that] comprise an ever-changing wiki, roughly twelve thousand words long," with twenty-four headings: "Hate Speech," "Bullying," "Harassment," etc. Users get a sanitized version called the "Community Standards," which says, "We remove content that glorifies violence," while the internal document is explicit: images of "charred or burning human beings," for example, are to be marked as "disturbing" but not taken down.
From FB's point of view, its mission is "to bring the world closer together." FB maintains that it is "a neutral platform, not a publisher, and so has resisted censoring its users' speech, even when that speech is ugly or unpopular." But, as it turns out, it's that old adage: money changes everything. FB, Twitter, and YouTube all benefit from advertising, and advertisers pay based on exposure or clicks. So influential users, whose posts generate the engagement that triggers FB's algorithm, are featured even if they violate content rules, up to the point where they become a "#PRFire." "Often, offending content has been flagged repeatedly [by moderators independently or after user complaints], to no avail, but, in the context of a press fire, it receives prompt attention." The net result is that influential users, especially politicians, including popular despots, get free rein, and FB continually tweaks its rules to accommodate them. In defense of FB and other social media platforms, they do walk a fine line between political censorship and free speech. This latter point, on censorship, has become an American right-wing talking point, although hard data show that such content is actually highlighted rather than suppressed, again because the algorithm is designed to feature popular posts.
There is an army of moderators, mostly poorly paid contract workers, positioned around the world to cover all time zones. Moderators, besides facing the stress of viewing and judging heinous content kicked to them by artificial intelligence, find themselves essentially having to go along to get along on much of the political content. If a moderator suggests removing a post, the next rung of management can reject the recommendation, and if a moderator's take-down requests are rejected often enough, their ranking declines, increasing the risk of dismissal. Interestingly, when content is successfully removed, those who posted it learn to reissue essentially the same content, walking the line of what is acceptable by stripping out the buzzwords or elements that triggered the rejection.