Facebook is fielding so many problems, oversights, scandals, and other miscellaneous ills that it wouldn’t surprise anyone to hear that its fact-checking program, undertaken last year after the network was confronted over its inaction on disinformation, is falling apart. But in this case the reason you haven’t heard much about it isn’t that it’s a failure; it’s that fact-checking is boring and thankless, and is being done quietly and systematically by people who are just fine with that.
The “falling apart” narrative was advanced in a recent article at The Guardian, and some of the problems noted in that piece are certainly real. But I was curious about the lack of documentation of the fact-checking process itself, so I talked with a couple of the people involved to get a better sense of it.
I didn’t get the impression of a program in crisis, but rather of one where the necessity of keeping the editorial process and the teams involved at arm’s length has produced apathy, both apparent and real, when it comes to making meaningful changes.
No bells, no whistles
Facebook likes to pretend that its research into AI will solve just about every problem it has. Unfortunately, not only is that AI hugely dependent on human intelligence to work in the first place, but the best it can generally do is forward things on to human agents for final calls. Nowhere is that more obvious than in fact-checking, where it is trivial for machine learning systems to surface possibly dubious links or articles, but at this stage essentially impossible for them to do any real evaluation of them.
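To make that division of labor concrete, here’s a minimal sketch in Python of triage in that spirit: crude heuristics flag suspicious items and queue them for a person rather than rendering a verdict. The signals, weights, and threshold are all invented for illustration and do not reflect Facebook’s actual models.

```python
from urllib.parse import urlparse

# Toy triage in the spirit described above: machine scoring surfaces
# candidates; humans make the final call. Every signal and number here
# is an assumption for illustration, not Facebook's actual system.

SCAMMY_PHRASES = ("you won't believe", "doctors hate", "shocking truth")

def suspicion_score(url: str, headline: str, shares_per_follower: float) -> float:
    """Score a link on a few crude heuristic signals."""
    host = urlparse(url).netloc
    score = 0.0
    if host.endswith((".info", ".xyz")) or any(c.isdigit() for c in host):
        score += 1.0  # odd-looking domain
    if any(phrase in headline.lower() for phrase in SCAMMY_PHRASES):
        score += 1.0  # clickbait-style headline
    if shares_per_follower > 5.0:
        score += 1.0  # amplification out of proportion to audience (bot-like)
    return score

def surface_for_review(items: list[dict], threshold: float = 2.0) -> list[dict]:
    """Pass anything at or above the threshold to the human review queue."""
    return [item for item in items
            if suspicion_score(item["url"], item["headline"],
                               item["shares_per_follower"]) >= threshold]
```

Note that nothing in the sketch ever labels a story false; it only decides what a person should look at first, which is exactly the handoff described above.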
That’s where the company’s network of independent fact-checkers comes in. No longer among their number are two former Snopes staffers who left to work at another fact-checking concern — pointedly not involved with Facebook — and who clearly had major problems with the way the program worked. Most explosive was the accusation that Facebook had seemingly tried to prioritize fact checks that concerned an advertiser.
But it wasn’t clear from their complaints just how the program does work. I chatted with Snopes head David Mikkelson and checked in with PolitiFact editor Angie Drobnic Holan. They emphatically denied allegations of Facebook shenanigans, though they had reservations of their own, and while they couldn’t provide exact details of the system they use, it sounds pretty straightforward.
“For the most part it’s literally just data entry,” explained Mikkelson. “When we fact-check something, we enter its URL into a database. You could probably dress it up in all kinds of bells and whistles, but we don’t really need or expect much more than that. We haven’t changed what we do or how we do it.”
Mikkelson described the Facebook system in broad terms. It’s a dashboard of links that are surfaced, as Facebook has explained before, primarily through machine learning systems that know what sort of thing to look for: weird URLs, bot promotion, scammy headlines, etc. They appear on the dashboard in some semblance of order, for instance based on traffic or engagement.
“It lists a thumbnail of what the item is, like is it an article or a video; there’s a column for estimated shares, first published date, etc,” said Mikkelson. “They’ve never given us any instructions on like, ‘please do the one with the most shares,’ or ‘do the most recent entry and work your way down,’ or whatever.”
In fact, there’s no need to use the dashboard that way, or even at all.
“There’s no requirement that we undertake anything that’s in their database. If there’s something that isn’t in there, which honestly is most of what we do, we just add it,” Mikkelson said.
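Taken at face value, the workflow Mikkelson describes amounts to little more than a shared table: dashboard rows with the columns he names, and a record of verdicts keyed by URL. The sketch below (again Python, again hypothetical; only the listed columns and the “just data entry” framing come from the interviews) shows how little machinery that implies.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape of the dashboard and database described above. Only
# the columns Mikkelson names (item type, estimated shares, first
# published date) and the URL-keyed data entry come from the article;
# all names and values here are invented.

@dataclass
class DashboardItem:
    url: str
    item_type: str          # "article", "video", etc.
    estimated_shares: int
    first_published: date

@dataclass
class FactCheckDatabase:
    verdicts: dict[str, str] = field(default_factory=dict)  # URL -> verdict

    def record_check(self, url: str, verdict: str) -> None:
        """'Literally just data entry': file a verdict against a URL."""
        self.verdicts[url] = verdict

item = DashboardItem(
    url="https://example.com/dubious-story",  # hypothetical URL
    item_type="article",
    estimated_shares=12_000,
    first_published=date(2018, 11, 1),
)

db = FactCheckDatabase()
db.record_check(item.url, "false")
# Checkers aren't bound to the dashboard's ordering and can also record
# checks for URLs that were never surfaced at all.
```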
Passive partner or puppet master?
I asked whether there was any kind of pushback or interference from Facebook of the kind Brooke Binkowski described in the Guardian story, where she mentioned several such occasions during her time at Snopes.
PolitiFact’s Holan said she thought the suggestion was “very misleading.” In a statement, the organization said that “As with all our work, we decide what to fact-check and arrive at our conclusions without input from Facebook or any third party. Any claim suggesting otherwise is misinformed and baseless.”
“I realize Facebook’s reputation is kind of in the dumpster right now already,” Mikkelson said, “but this is damaging to all the fact-checking partners, including us. We would never have continued a working relationship with Facebook or any other partner that told us to couch fact checks in service of advertisers. It’s insulting to suggest.”
The question of receiving compensation for fact-checking was another of Binkowski’s qualms. On the one hand, it could be seen as a conflict of interest for Facebook to be paying for the service, since that opens all kinds of cans of worms — but on the other, it’s ridiculous to suggest this critical work can or should be done for free. Though at first, it was.
When the fact-checking team was first assembled in late 2016, Snopes wrote that it expected “to derive no direct financial benefit from this arrangement.” But eventually it did.
“When we published that, the partnership was in its earliest, embryonic stages — an experiment they’d like our help with,” Mikkelson said. Money “didn’t come up at all.” It wasn’t until the following year that Facebook raised the prospect of paying fact-checkers, though it didn’t announce this publicly, and Snopes eventually earned, and disclosed, $100,000 from the company. Facebook had put bounties on high-profile political stories that were already on Snopes’s radar, as it had for others in the fact-checking group.
The money came despite the fact that Snopes never asked for it or billed Facebook — a check arrived at the end of the year, he recalled, “with a note that said ‘vendor refuses to invoice.’ ”
Partners, but not pals
As for the mere concept of working for a company whose slippery methods and unlikeable leadership have been repeatedly pilloried over the last few years, it’s a legitimate concern. But Facebook is too important a platform to ignore on account of ethical lapses by higher-ups who are not involved in the day-to-day fact-checking operation. Millions of people still look to Facebook for their news.
To abandon the company because (for instance) Sheryl Sandberg hired a dirty PR firm to sling mud at critics would be antithetical to the mission that drove these fact-checking companies to the platform to begin with. After all, it’s not like Facebook had a sterling reputation in 2016, either.
Both PolitiFact and Snopes indicated that their discontent with the company was more focused on the lack of transparency within the fact-checking program itself. The tools are basic and feedback is nil. Questions like the following have gone unanswered for years:
What constitutes falsity? What criteria should and shouldn’t be considered? How should satire be treated if it is spreading as if it were fact? What about state-sponsored propaganda and disinformation? Have other fact-checkers looked at a given story, and could or should their judgments inform one another’s? What is the immediate effect of marking a story false — does it stop spreading? Is there pushback from the community? Is the outlet penalized in other ways? What about protesting an erroneous decision?
The problem with Facebook’s fact-checking operation, as is so often the case with this company, is a lack of transparency with both users and partners. The actual fact-checking happens outside Facebook, and rightly so; it’s not likely to be affected or compromised by the company, and in fact if it tried, it might find the whole thing blowing up in its face. But while the checking itself is tamper-resistant, it’s not at all clear what effect, if any, it’s having, or how it will be improved or implemented in the future. Surely that’s relevant to everyone with a stake in this process?
In the year and a half or more the program has run, little has been communicated and little has changed, and what has changed has come slowly. But at the same time, thousands of articles have been checked by experts who are used to having their work go largely unrewarded, and despite Facebook’s lack of transparency with them and with us, it seems unlikely that the work has been ineffective.
For years Facebook was a rat’s nest of trash content and systematically organized disinformation. In many ways, it still is, but an organized fact-checking campaign works like constant friction acting against the momentum of this heap. It’s not flashy and the work will never be done, but it’s no less important for all that.
As with so many other Facebook initiatives, we hear a lot of promises and seldom much in the way of results. The establishment of a group of third parties contributing independently to a fact-checking database was a good step, and it would be surprising to hear it has had no positive effect.
Users and partners deserve to know how it works, whether it’s working, and how it’s being changed. That information would disarm critics and hearten allies. If Facebook continues to defy these basic expectations, however, it only further justifies and intensifies the claims of its worst enemies.