This was originally a comment on Sophie Dockx's post Quora Moderation is Under Attack : https://insurgency.quora.com/Sop… . Republished here with permission from Dan Rosenthal. (See also my comment at the original location.)
I’d take any claims of unjust banning from controversial users such as Sophie with an enormous grain of salt. Look, I’ve worked in online community management for a long time, including administering/moderating social media outlets WAY bigger than Quora; I’ve also represented clients in consumer protection claims when they come to me wanting to sue a website for “wrongfully banning” them. 99% of the time when someone says they were banned unfairly, they are wrong. Sometimes it’s an intentional attempt to deceive; other times it’s simple human nature — an inability to admit one’s mistakes. But it’s very, very rare to actually see someone permanently banned by mistake. And even in the cases I’ve seen on Quora where a ban was unquestionably a mistake, it was reversed within a day or two.
Now, this is not to say things are lovely in Quora Moderation land. Quora Moderation is all kinds of broken and insufficient; but I have to laugh at the concept that it’s because they’re banning *too many* people.
Fake names are a problem, but on their own — absent any other bad behavior — the only impact they have is making it more difficult to assess a user’s credibility.
The real problem is that fake names are highly correlated with malicious users intent solely (or largely) on bad behavior. And Quora’s small moderation team is not doing a good enough job of identifying potential trouble users and flagging them in such a way that they can be quickly moderated. Instead, almost all users — good and bad — appear to be treated as part of the same pool, causing moderator overload.
Put it this way: when triaging an accident scene, do you think the doctor should start with the people showing no outward signs of injury, or the people bleeding profusely?
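The triage idea is essentially a priority queue: score each report by its risk signals and work the highest-risk ones first, rather than processing everyone in one undifferentiated pool. Here's a minimal sketch in Python — the field names, weights, and example accounts are all hypothetical, not anything Quora actually uses:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: float
    user: str = field(compare=False)

def risk_score(signals: dict) -> float:
    # Hypothetical weights: prior strikes and sockpuppet signals
    # dominate; a long account history lowers the score slightly.
    return (3.0 * signals.get("prior_strikes", 0)
            + 5.0 * signals.get("sockpuppet_flags", 0)
            - 0.1 * signals.get("account_age_days", 0))

def build_queue(reports):
    heap = []
    for user, signals in reports:
        # heapq is a min-heap, so negate the score to pop
        # the highest-risk report first.
        heapq.heappush(heap, Report(-risk_score(signals), user))
    return heap

queue = build_queue([
    ("quiet_user", {"account_age_days": 900}),
    ("troll_account", {"prior_strikes": 4, "sockpuppet_flags": 2}),
])
first = heapq.heappop(queue).user  # the profusely bleeding patient, so to speak
```

The point isn't the specific weights; it's that any ordering at all beats treating every report as equally urgent.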
There are plenty of ways this can be done. Machine learning is something Quora keeps harping on about, to the point of writing public articles about it. Yet Quora’s ML for moderation is, frankly, trash. A proper, robust system would be self-correcting, analyzing the patterns of manually banned users and increasingly flagging similar accounts as potential threat vectors. A proper, robust system would keep this pool segregated from the general population of Quora users, so they can be monitored and their interactions can be vetted. We see the first halting steps at this with the anonymity review period, for instance; but it could be so much more.
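The "flag accounts similar to banned ones" loop can be sketched very simply: represent each account as a feature vector, take the centroid of previously banned accounts, and route any new account that sits close to that centroid into a segregated review pool. This is an illustrative toy, assuming made-up features — not a claim about how Quora's ML works:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def flag_for_review(banned_vectors, candidates, threshold=0.9):
    """Route accounts resembling previously banned ones into a
    segregated review pool instead of the general population."""
    c = centroid(banned_vectors)
    return [user for user, vec in candidates if cosine(vec, c) >= threshold]

# Hypothetical features: (posts per day, reports received, account age in years).
banned = [[40, 12, 1], [35, 9, 2]]
candidates = [("lookalike", [38, 11, 1]), ("regular", [2, 0, 400])]
flagged = flag_for_review(banned, candidates)
```

Each new manual ban enriches `banned`, so the pool of flagged lookalikes grows on its own — that's the self-correcting part. A real system would use richer features and an actual classifier, but the feedback loop is the point.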
Meanwhile, there is no parallel process for trusted users to directly report actual confirmed problem users post-incident, because frankly even those of us with access to moderation resources aren’t getting responses from said resources. I’ve personally made Quora moderation staff aware of a well-known user, with evidence of that person openly admitting to using multiple sockpuppet accounts to harass me, combined with contextual evidence analyzed from their writing. Despite personally directing a moderator’s attention to that, not only has no action been taken, it didn’t even merit a response.
So no, I don’t think the problem is that Quora is banning the wrong people. I think the problem is that Quora is not banning nearly enough people, and as a result errors are visibly magnified because they’re not being measured against any noticeable progress.